Virtualization Archives - Thomas LaRock

Thomas LaRock is an author, speaker, data expert, and SQLRockstar. He helps people connect, learn, and share. Along the way he solves data problems, too.

What’s In Your Database? (December 7, 2017)

Recently, while settling in to watch television with my family, we were treated to an amusing advertisement for a credit card company. This particular commercial showed men dressed as Vikings asking questions about the contents of my wallet. My son laughed then turned to me and asked the obvious question, “Papa, what’s in your wallet?”

“Nothing,” was my answer, and it’s true. Ever since direct deposit became a thing I find I never have any cash in my wallet, even while traveling. My wallet has a handful of credit cards, insurance cards, and a picture of my daughter taken when she was a few hours old.

As I thought about the minimal contents of my wallet, and the things inside I had forgotten, I began to ponder data, databases, and data security. I know everyone collects and stores data without truly understanding its importance or value. We have all seen data forgotten, neglected, and misused.

I think we should all take time to ask ourselves the simple question, “What’s in your database?”

What’s In Your Database?

Data is the obvious answer, of course. But there are many different types of data inside of a database, some more sensitive than others. I’m not just talking about data types like INT or VARCHAR. No, I’m talking about data that can be classified as personally identifiable information (PII). This is data containing a unique identifier from which the identity of a specific person can be derived.

PII data has existed for centuries (thank you, US Census!), albeit not in digital form. To some degree, awareness of this data’s value has increased. This is evident in the number of security measures offered and deployed by many companies as they try to protect their data and databases, such as encryption, access controls, permissions, password policies, and securing backups.

But has awareness of its value increased enough? No, I don’t think so; and this will become even more true as the Internet of Things (IoT) begins to take hold. The evidence: despite all of the security measures companies have adopted to protect their data and databases, we still have data breaches.

Why does this continue to happen?

As I said, I suspect it is because people don’t understand the true value and importance of their data and databases. A database isn’t just a container for your data. A database contains the most precious business asset any company can have. If you don’t have data, you don’t have a business.

Security is a Shared Responsibility

I’ve written before about how security is a shared responsibility. Last week at AWS re:Invent I was ecstatic to see and hear AWS CTO Werner Vogels dedicate time during his keynote to talk directly about data security (starting at about 45:38 in the keynote video).

“Protecting your customer should be your number one priority.” Preach.

And yet, data breaches will continue until we are able to create a deep appreciation for business data. We need to guard business data as closely as our own wallets.

AWS knows this, and that’s why they’ve started rolling out enhanced security features. They needed to do this to keep pace with what Microsoft has been doing for years already. Check out the long list of security features in Azure, many of which were rolled out ahead of similar AWS offerings.

Good security comes from good people. Humans are the weakest link in the data chain. In fact, humans have been known to give away their passwords in exchange for a cheap pen or a chocolate bar.

We must do better.

Dance like no one is watching, encrypt like everyone is

If someone told you that you might lose your wallet, you’d go out of your way to keep it secure. You should have the same mindset with your data. If concerned about losing your wallet, you’d move it to a front pocket, keep your hand on it, or minimize the contents inside so that if it were lost or stolen you could recover quickly. For data, this means making certain you have effective monitoring, logging, and auditing tools in place, as well as effective security measures such as encrypting data at rest, in use, and in flight.
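For encryption at rest, SQL Server’s Transparent Data Encryption (TDE) is one common option. Here is a minimal sketch, assuming the SqlServer PowerShell module’s Invoke-Sqlcmd is available; the instance name, database name, certificate name, and password below are all placeholders:

$query = @"
-- Hypothetical names and password; replace with your own.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseAStrongPasswordHere>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
USE YourDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE YourDatabase SET ENCRYPTION ON;
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query

One word of caution: back up that certificate and its private key immediately, because without them you cannot restore the encrypted database anywhere else.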

And if you lost your wallet you wouldn’t wait months to tell someone. You’d act quickly. The same should go for your data. The moment you discover a breach you need to disclose the breach to minimize damage and losses.

Only by truly understanding and appreciating the value of our data and databases and motivating everyone to take these steps will we see the necessary diligence needed to protect data from theft. Maybe we need our own commercial with IT professionals dressed as Vikings. That might help get the point across.

Book Review: VMware vSphere 6.5 Host Resources Deep Dive (August 4, 2017)

My friend David Klee (blog | @kleegeek) recommended that I add the book VMware vSphere 6.5 Host Resources Deep Dive to my bookshelf.

(Wait. Did you know I have a bookshelf? Well, now you do. It’s filled with lots of good database-centric reference material that I’ve collected over the years. Have a look and bookmark it for future use. I add to the bookshelf often as I try to keep it current. In fact, you can probably expect me to expand it a bit to include some books on data science. You know, in case you are into that sort of thing.)

If you are responsible for administering database servers running inside of VMware, you will want a copy of this book. It is aimed at a VMware admin audience, not a database audience, but anyone who administers databases, or fancies themselves a database tuning expert, will find the information valuable.

Topics inside the book that would be of interest to DBAs include:

vNUMA: advanced balancing, optimization, and memory speeds
CPU core counts versus clock speed
vSphere balanced power management
Queues and where they live inside the end-to-end storage data paths

If you are reading this then you are likely on a device that has the ability to order this book from Amazon right now. So, you should do that, because it’s that good.

 

HOW TO: Improve Database Performance Without Changing Code (October 6, 2015)

I’ve stated before that great database performance starts with great database design. So, if you want a great database design you must find someone with great database experience. But where does a person get such experience?

We already know that great judgment comes from great experience, and great experience comes from bad judgment. That means great database experience is the result of bad judgment repeated over the course of many painful years.

So I am here today to break this news to you. Your database design stinks.

There, I said it. But someone had to be the one to tell you. I know this is true because I see many bad database designs out in the wild, and someone is creating them. So I might as well point my finger in your direction, dear reader.

We all wish we could change the design or code, but there are times when it is not possible to make changes. As database usage patterns push horrible database designs to their performance limits, database administrators are handed an impossible task: make performance better, but don’t touch anything.

Imagine that you take your car to a mechanic for an oil change. You tell the mechanic they can’t touch the car in any way, not even open the hood. Oh, and you need it done in less than an hour. Silly, right? That is just as silly as when you go to your database administrator and say: “we need you to make this query faster and you can’t touch the code”.

Lucky for us, the concept of “throwing money at the problem” is not new, as shown by this ancient IBM commercial.

Of course throwing money at the problem does not always solve the performance issue. This is the result of not knowing what the issue is to begin with. You don’t want to be the one to spend six figures on new hardware to solve an issue with query blocking.

Even after ordering the new hardware, it takes time for it to arrive, to be installed, and for the issue to be resolved. What can you do in the meantime to improve performance without touching code?

I put together this list of items to help you fix database performance issues without touching code. Use this as a checklist to research and take action upon before blaming code. Some of these items cost no money, but some items (such as buying flash drives) might. What I wanted to do was to provide a starting point for things you can research and do yourself.

As always: You’re welcome.

Examine your plan cache

If you need to tune queries then you need to know what queries have run against your instance. A quick way to get such details is to look inside the plan cache. I’ve written before about how the plan cache is the junk drawer of SQL Server. Mining your plan cache for performance data can yield improvements such as optimizing for ad-hoc workloads, estimating the correct cost threshold for parallelism, or finding which queries are using a specific index.
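As a starting point, here is one example of mining the plan cache, assuming the SqlServer (or older SQLPS) module so that Invoke-Sqlcmd is available; adjust the instance name for your environment. This sketch grabs the ten most CPU-hungry statements:

$query = @"
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time AS total_cpu_time,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query

Speaking of indexes…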

Review your index maintenance

I assume you are doing this already, but if not then now is the time to get started. You can use maintenance plans, roll your own scripts, or use scripts provided by some SQL Server MVPs. Whatever method you choose, make certain you are rebuilding, reorganizing, and updating statistics only when necessary. I’d even tell you to take time to review for duplicate indexes and get those removed.
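If you want a quick look at which indexes might need attention, a sketch like this (again assuming Invoke-Sqlcmd, with a hypothetical database name) reports indexes above a common fragmentation threshold:

$query = @"
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30  -- common rebuild threshold
  AND ips.page_count > 1000                  -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'YourDatabase' -Query $query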

Index maintenance is crucial for query performance. Indexes help reduce the amount of data that is searched and pulled back to complete a request. But there is another item that can reduce the size of the data searched and pulled through the network wires…

Review your archiving strategy

Chances are you don’t have any archiving strategy in place. I know because we are data hoarders by nature, and are only now starting to realize the horrors of such things. Archiving data implies less data, and less data means faster query performance. One way to get this done is to consider partitioning. (Yeah, yeah, I know I said no code changes; this is a schema change to help the logical distribution of data on physical disk. In other words, no changes to existing application code.)
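To give you the flavor of it, here is a minimal sketch that partitions a hypothetical Orders table by year; in a real deployment you would map the partitions to separate filegroups rather than dumping everything on PRIMARY:

$query = @"
-- Hypothetical example: partition by order year.
CREATE PARTITION FUNCTION pfOrderYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01', '2015-01-01');

-- ALL TO ([PRIMARY]) keeps the sketch simple; production systems
-- would typically spread partitions across multiple filegroups.
CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

-- New or rebuilt tables and indexes are then created ON psOrderYear(OrderDate).
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'YourDatabase' -Query $query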

Partitioning requires some work on your end, and it will increase your administrative overhead. Your backup and recovery strategy must change to reflect the use of more files and filegroups. If this isn’t something you want to take on, then you may instead want to consider…

Enable page or row compression

Another option for improving performance is data compression at the page or row level. The tradeoff for data compression is an increase in CPU usage. Make certain you perform testing to verify the benefits outweigh the extra cost. For tables with a low amount of updates and a high amount of full scans, data compression is a decent option. Here is the SQL 2008 Best Practices whitepaper on data compression, which describes in detail the different types of workloads and estimated savings.
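Before flipping the switch, you can ask SQL Server to estimate the savings first. A sketch, with a hypothetical table name:

$query = @"
-- Estimate the space savings for PAGE compression on a hypothetical table.
EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'YourLargeTable',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- If the estimate looks good, enabling it is a rebuild away:
-- ALTER TABLE dbo.YourLargeTable REBUILD WITH (DATA_COMPRESSION = PAGE);
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'YourDatabase' -Query $query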

But, if you already know your workload to that level of detail, then maybe a better option for you might be…

Change your storage configuration

Often this is not an easy option, if it is an option at all. You can’t just wish for a piece of spinning rust on your SAN to go faster. But technology such as Windows Storage Spaces and VMware’s VSAN make it easy for administrators to alter storage configurations to improve performance. At VMworld in San Francisco I talked about how VSAN technology is the magic pixie dust of software defined storage right now.

If you don’t have magic pixie dust then SSDs are an option, but changing storage configuration only makes sense if you know that disk is your bottleneck. Besides, you might be able to avoid reconfiguring storage by taking steps to distribute your I/O across many drives with…

Use distinct storage devices for data, logs, and backups

These days I see many storage admins configuring database servers to use one big RAID 10, or OBR10 for short. For a majority of systems out there the use of OBR10 will suffice for performance. But there are times you will find you have a disk bottleneck as a result of all the activity hitting the array at once. Your first step is then to separate out the database data, log, and backup files onto distinct drives. Database backups should be off the server. Put your database transaction log files onto a different physical array. Doing so will reduce your chance for data loss. After all, if everything is on one array, then when that array fails you will have lost everything.

Another option is to break out tempdb onto a distinct array as well. In fact, tempdb deserves its own section here…

Optimize tempdb for performance

Of course this is only worth the effort if tempdb is found to be the bottleneck. Since tempdb is a shared resource amongst all the databases on the instance it can be a source of contention. That is why we have lots of information on how to optimize tempdb for performance as well as trace flags.
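If you suspect tempdb allocation contention, a sketch like this (Invoke-Sqlcmd assumed again) looks for the telltale PAGELATCH waits against tempdb, which is always database_id 2:

$query = @"
-- PAGELATCH waits on resources such as 2:1:1 (PFS), 2:1:2 (GAM),
-- and 2:1:3 (SGAM) suggest tempdb allocation contention.
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query

A common remedy, if you see this, is adding more tempdb data files of equal size.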

We operate in a world of shared resources, so finding tempdb being a shared resource is not a surprise. Storage, for example, is a shared resource. So are the series of tubes that makes up your network. And if the database server is virtualized (as it should be these days) then you are already living in a completely shared environment. So why not try…

Increase the amount of physical RAM available

Of course, this only makes sense if you are having a memory issue. Increasing the amount of RAM is easy for a virtual machine when compared to having to swap out a physical chip. OK, swapping out a chip isn’t that hard either, but you have to buy one, then wait for it to arrive in the mail, and then bring it to the data center, and…you get the idea.

When adding memory to your VM, one thing to be mindful about is whether your host is using vNUMA. If so, it could be the case that adding more memory results in performance issues for some systems. So, be mindful about this and know what to look for.

Memory is an easy thing to add to any VM. Know what else is easy to add on to a VM?

Increase the number of CPU cores

Again, this is only going to help if you have identified that CPU is the bottleneck. You may want to consider swapping out the CPUs on the host itself if you can get a boost in performance speeds. But adding physical hardware such as a CPU, same as with adding memory, may take too long to physically complete. That’s why VMs are great, as you can make modifications in a short amount of time.

Since we are talking about CPUs, I would also mention that you should examine the Windows power plan settings; this is a known issue for database servers.
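Checking the active plan takes one command, and the fix takes one more; the GUID below is the well-known identifier for the High Performance scheme:

# Show the active Windows power plan; database servers generally
# want High Performance rather than Balanced.
powercfg /getactivescheme

# Switch to High Performance (well-known scheme GUID):
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

But even with virtualized servers, resources such as CPU and memory are not infinite…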

Reconfigure VM allocations

Many performance issues on virtualized database servers are the result of the host being over-allocated. Over-allocation by itself is not bad. But over-allocation leads to over-commit, and over-commit is when you see performance hits. You should be conservative with your initial allocation of vCPU resources when rolling out VMs on a host. Aim for a 1.5:1 ratio of vCPU to logical cores (for example, a host with 32 logical cores would start with no more than 48 total allocated vCPUs) and adjust upwards from there, always paying attention to overall host CPU utilization. For RAM you should stay below 80% total allocation, as that allows room for growth and migrations as needed.

You should also take a look at how your network is configured. Your environment should be configured for multi-pathing. Also, know your current HBA queue depth, and what values you want.

Summary

We’ve all had times where we’ve been asked to fix performance issues without changing code. The items listed above are options for you to examine and explore in your effort to improve performance before changing code. Of course, it helps if you have an effective database performance monitoring solution in place to help you make sense of your environment. You need performance metrics and baselines in place before you start turning any “nerd knobs”, otherwise you won’t know if you are having a positive impact on performance no matter which option you choose.

With the right tools in place collecting performance metrics you can then understand which resource is the bottleneck (CPU, memory, disk, network). Then you can try one or more of the options above. And then you can add up the amount of money you saved on new hardware and put that on your performance review.

Get All Endpoints for VMs in an Azure Subscription (May 12, 2015)

I wrote a post recently about troubleshooting connectivity for endpoints on Microsoft Azure VMs. The day the post went out I was greeted with this tweet:

“http://t.co/Ww3MyYjwPu right on time context post for me @SQLRockstar tx sir. Need to see on ports for my #linux vms with #mysql running 🙂” — Shyam Viking (@myluvsql), April 30, 2015

So then I did what I usually do: I let my mouth (in this case, fingers) get ahead of my brain. Here was an opportunity for me to do more work! I answered the tweet with an offer to write that script.

Feeling like my Powershell script wasn’t getting the job done here, I decided to pull together the code necessary to get all endpoints for VMs in an Azure subscription. So that’s what we have here. You’re welcome. As always, here is the usual disclaimer:

Script disclaimer, for people who need to be told this sort of thing:

DISCLAIMER: Do not run code you find on the internet in your production environment without testing it first. Do not use this code if your vision becomes blurred. Seek medical attention if this code runs longer than four hours. On rare occasions this code has been known to cause one or more of the following: nausea, headaches, high blood pressure, popcorn cravings, and the impulse to reformat tabs into spaces. If this code causes your servers to smoke, seek shelter. Do not taunt this code.

You can also download a copy of the Powershell script here.

<##############################################
    File: GetAllEndpoints.ps1
    Author: Thomas LaRock, https://thomaslarock.com/contact-me/
        https://thomaslarock.com/2015/05/get-all-endpoints-for-vms-in-an-azure-subscription

    Summary: This script will loop through all the virtual machines
              in an Azure subscription and report on the assigned
              endpoints.

    Date: May 11th, 2015

    You may alter this code for your own purposes. You may republish
    altered code as long as you give due credit.

    THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY
    OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT
    LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR
    FITNESS FOR A PARTICULAR PURPOSE.
##############################################>

<# We are going to loop through all VMs in this subscription.
   However, if you want to filter for a subset, perhaps by name,
   you could use something like:

   #$VMlist = Get-AzureVM | Where-Object { ($_.Name -ilike "something") }

   But we don't want to filter for our example, so we just grab
   all VMs and build an array. #>

$VMlist = Get-AzureVM

<# We will now loop through each VM in the array #>

foreach ($VMServiceName in $VMlist) {

    # Get the endpoint details for the current VM
    $obj = Get-AzureVM -ServiceName $VMServiceName.ServiceName -Name $VMServiceName.Name | Get-AzureEndpoint

    # Build an output object holding the VM name, endpoint names, and local ports
    $Output = New-Object PSObject
    $Output | Add-Member VMName $VMServiceName.Name
    $Output | Add-Member EndpointNames $obj.Name
    $Output | Add-Member Endpoints $obj.LocalPort

    Write-Output $Output
}

The Powershell script will output the details to the command window. Feel free to format the output as you see fit, I can imagine some might want to output to a text file. Of course, with Powershell you could output to Excel and create a donut chart if you wanted.
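For example, to capture the results in a CSV file instead of reading them in the console, you could pipe the script’s output like so (file names are, of course, up to you):

# Run the script and save the endpoint report to a CSV file
.\GetAllEndpoints.ps1 | Export-Csv -Path .\endpoints.csv -NoTypeInformation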

Enjoy!

Troubleshooting Azure Connectivity: Ports and Endpoints (April 29, 2015)

It was a simple enough question, or so I thought. One I felt should be either a simple “yes” or “no”.

“Do we block remote desktop connections here?”

Sure enough, I got back the quick and simple answer I expected, along with a question for myself.

“Nope. We don’t block any RDP sessions. Maybe you configured your server wrong?”

I was in the SolarWinds Austin office trying to connect to one of my virtual machines running inside of Microsoft Azure. The remote desktop (RDP) session had worked fine from my home for weeks, and again from the hotel the night before. But now it didn’t work.

Knowing how to diagnose an issue is a skill you acquire with experience. I thought about all the possible ways this connection could be failing and the only difference I could find was the network. But here I was, being told that nothing was blocked, despite the evidence to the contrary.

Being the good DBA that I am I double-checked my work. I looked at the current port settings for this server in the Azure Portal. These ports are randomly assigned when the VM is created. For remote sessions the private port is 3389, but the public port was set to 54630:

[Screenshot: the VM’s endpoint settings in the Azure Portal, private port 3389 and public port 54630]

And I checked the port number being used in my RDP connection:

[Screenshot: the RDP connection dialog using the public port]

But the result stayed the same. After a minute or so I would get this message:

[Screenshot: the error message “This computer can’t connect to the remote computer.”]

This appears to be a generic error message. There are no details, no links to documentation on how to troubleshoot possible connectivity errors. This error message is less than helpful. We are left to fend for ourselves at this point.

Instead of giving up I put my troubleshooting skills to work by breaking down each sentence.

“This computer can’t connect to the remote computer.”

Well, OK. But that doesn’t tell me if the issue is with me, is with the remote computer, or with something in between. We are using a cloud service (Azure) so it is always possible that communication failures may happen. I move on to look at the second sentence.

“Try connecting again.”

Definition of insanity: doing the same thing over and over and expecting a different result. Right now this error message is Groundhog Day for cloud admins. I move on to the last sentence.

“If the problem continues, contact the owner of the remote computer or your network administrator.”

Well, I’m the owner of this remote computer, and I already know I won’t be of much use to me. Unless you consider Microsoft to be the owner. And, in a way, they are the owner, but I don’t have the number for the data center handy.

But what about that network administrator? That’s a good, and as it turns out only, clue here. Could it be that there is, indeed, an issue blocking the port despite my being told that was not the case?

I went back to my colleague to ask more questions about the network. (That’s my way of politely writing “I went back to my colleague to blame the network”.) And this time I was greeted with the most brilliant of replies:

“We don’t block any ports here. Show me the error message.”

OK, maybe I call that brilliant because I’ve written before about showing me the error message. A picture is worth a thousand support tickets, so we went to my machine, I launched my RDP session, and it failed. The response at that point was this:

“Oh, why are you using that port? I doubt we are allowing non-default ports. Just use the default 3389 and see if that works.”

I was happy, confused, and frustrated all at the same time. Yeah, I was a typical user, the one with a case of PEBKAC.

“But you just said you weren’t blocking any-“

“And you said RDP wasn’t working. You never said you were using a different port. RDP works fine with the default port of 3389. So try 3389 and let’s see what happens.”

So, back to the Azure portal I went, updating the public port to be 3389, matching the private port. And then, trying RDP again, we see success:

[Screenshot: the RDP session connecting successfully]

Which then led to this exchange:

“I thought you said we didn’t block any ports!”

“What I meant was we don’t block the correct ports. Use the correct ports and you’ll be fine.”

This, dear reader, is what you call experience.

I’ve lost time before due to a firewall of one kind or another. My favorite all-time firewall issue was at TechEd in New Orleans in 2013 when the convention center was blocking port 1433. Ask Grant Fritchey (blog | @gfritchey) or me about that someday. Good times.

A few months after that trip to Austin I found myself at SQLBits, delivering a precon with Karen López (blog | @datachick). We built out some VMs in Azure so that our attendees could put their hands on something, because that’s what makes for a proper training experience.

Everything worked fine from the hotel the day before. Our scripts built and configured all the VMs in a matter of minutes. We could RDP to the machines without any trouble. Everything was working as expected.

Until it wasn’t.

When we got to the event the next day we were no longer able to RDP to our Azure VMs.

I was concerned I had somehow made a mistake with the port numbers. I set about double-checking them when an attendee approached me and suggested we should check the ports again. I was confused at first (probably because he was speaking British) but then I immediately understood what was being suggested: the conference center was blocking the non-default ports! Same as with Austin, if we switched to 3389, then RDP would work as expected.

So we set about manually updating each VM through the portal. And as I was updating each one it occurred to me that I should have a script for this in the future, should I find myself needing to quickly make changes to the RDP ports (any endpoints, really) on many VMs at the same time.

So, here is the script I cobbled together after SQLBits to help me for next time. You’re welcome. As always, here is the usual disclaimer:

Script disclaimer, for people who need to be told this sort of thing:

DISCLAIMER: Do not run code you find on the internet in your production environment without testing it first. Do not use this code if your vision becomes blurred. Seek medical attention if this code runs longer than four hours. On rare occasions this code has been known to cause one or more of the following: nausea, headaches, high blood pressure, popcorn cravings, and the impulse to reformat tabs into spaces. If this code causes your servers to smoke, seek shelter. Do not taunt this code.

You can also download a copy of the script here.

<##############################################
    File: AlterEndpoints.ps1             
    Author: Thomas LaRock, https://thomaslarock.com/contact-me/
        https://thomaslarock.com/2015/04/troubleshooting-azure-connectivity-ports-and-endpoints        

    Summary: This script will loop through all the virtual machines
              in an Azure subscription. You can modify the script below
              to add, modify, or remove endpoints as needed.    

    Date: April 28th, 2015

    You may alter this code for your own purposes. You may republish
    altered code as long as you give due credit.

    THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY
    OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT
    LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR
    FITNESS FOR A PARTICULAR PURPOSE.
##############################################>

<# We are going to loop through all VMs in this subscription.
   However, if you want to filter for a subset, perhaps by name,
   you could use something like:

   #$VMlist = Get-AzureVM | Where-Object { ($_.Name -ilike "something") }

   But we don't want to filter for our example, so we just grab
   all VMs and build an array. #>

$VMlist = Get-AzureVM

<# We will now loop through each VM in the array #>

foreach ($VMServiceName in $VMlist) {

    # Force the Remote Desktop endpoint back to the default port 3389
    Get-AzureVM -ServiceName $VMServiceName.ServiceName -Name $VMServiceName.Name | Set-AzureEndpoint -Name "Remote Desktop" -PublicPort 3389 -LocalPort 3389 -Protocol "tcp" | Update-AzureVM

}

If you wanted to add an endpoint to all your VMs that’s easy, you just use the following syntax:

Get-AzureVM -ServiceName $VMServiceName.ServiceName -Name $VMServiceName.Name | Add-AzureEndpoint -Name "Remote Desktop" -Protocol "tcp" -PublicPort 3389 -LocalPort 3389 | Update-AzureVM

If you wanted to remove an endpoint on all your VMs that’s easy too, you just use the following syntax:

Get-AzureVM -ServiceName $VMServiceName.ServiceName -Name $VMServiceName.Name | Remove-AzureEndpoint -Name "Remote Desktop" | Update-AzureVM

I even have a version of this script that can remove all endpoints from all VMs, but I won’t post it here because I’d be concerned someone ran that unwittingly. I would rather not be the enabler for someone bringing down hundreds of servers. But Denny Cherry (blog | @mrdenny) needed it one night so I put it together for him, and I know others may want it as well. If you want a copy of the code snippet, just drop me an email and I’ll send it to you.

The lesson here is that when working remotely you need to consider things like firewalls and blocked ports, and be ready to quickly troubleshoot, diagnose, and remedy Azure connectivity issues.
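One quick client-side check I now reach for, assuming you are on Windows 8 / Server 2012 or later where Test-NetConnection ships in the box (the VM name below is a placeholder):

# Test whether the RDP port is reachable from where you are sitting
Test-NetConnection -ComputerName 'myvm.cloudapp.net' -Port 3389

# TcpTestSucceeded : False usually means something between you and
# the VM (a firewall, a proxy, the venue network) is blocking the port.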

And always blame the network.

The Five DBA Food Groups (November 26, 2014)

Tomorrow is Thanksgiving here in the USA. It’s a day where we are supposed to be thankful for what we have been given, and instead we mostly use it as an excuse to eat obscene amounts of food, watch 10 hours of NFL action, and force retail workers to work longer hours in the name of Capitalism.

With the emphasis on food this time of year, I wondered if it was possible to classify “food groups” that every DBA should have as part of a balanced workload. Turns out the answer is “yes”, because I went ahead and did exactly that. I’m here today to introduce you to the five food groups every DBA needs. You’re welcome.

Of course we aren’t talking about real food here; we are talking about tasks that every DBA needs to understand and perform weekly (or even daily). Next Tuesday, the 2nd of December, 2014, at 11AM CST (UTC-6:00), I will be presenting a webinar with Karen López (blog | @datachick) titled “Solving the Accidental DBA Problem” for the PASS DBA Fundamentals Virtual Chapter.

As part of that talk we will dive into these food groups a bit more in an effort to help anyone who finds themselves suddenly thrust into the role of DBA for their shop. I’m excited to be building content again with Karen, and even more excited that I have the opportunity to get back into what I call “DBA Survivor” mode. Since writing my book four years ago I haven’t spent a lot of time in the area of professional development. I’m very thankful to have been given the opportunity to be a DBA, and even more thankful to have been given the opportunity to write a book to help others with the career as a data professional.

So, let’s take some time to not only review the DBA food groups, but let’s pair them with food we are likely to be eating tomorrow. Of course you don’t eat each food group separately, but all at the same time. It’s not really Thanksgiving if you just had stuffing, or just drank wine (except for Karen’s dinners, of course).

Discovery

Chances are if you are an accidental DBA you are not fully aware of each and every server and/or application you are responsible for. You need to take the time to figure out what exists in your environment. The best way to do this is to talk with as many people as possible, and not just your manager. Talk to the business end users, the server team, and developers. Find out what systems they are expecting you to be able to help with. You can also use 3rd party tools to help discover things like currently installed and running instances of SQL Server or even some other RDBMS that no one knows about.
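One hedged example of that discovery step: the .NET SqlDataSourceEnumerator can list SQL Server instances that announce themselves on the network. Results depend on the SQL Browser service and firewalls, so treat this as a starting point, not a complete inventory:

# Enumerate SQL Server instances visible on the network
[System.Data.Sql.SqlDataSourceEnumerator]::Instance.GetDataSources() |
    Format-Table ServerName, InstanceName, IsClustered, Version -AutoSize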

For a food pairing, discovery is all about finding something on the table that you either don’t see often, or have never seen before, and trying a portion. For guests at our family dinners this usually means one thing: meat pie.

Recovery

If you have the letters “DBA” in your job description then you had better be able to recover data quickly whenever it is needed. Nothing will get you fired quicker from your job as a DBA than your inability to recover data for your business. Take the time to review the recovery plan for your shop, verify that your database backups are running, and test some restores to make certain that everything is working as expected.
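At minimum, confirm your backup files are readable; here is a sketch with a hypothetical file path, assuming Invoke-Sqlcmd is available:

$query = @"
-- Confirm the backup file is readable; a full test restore on another
-- server is still the only real proof of recoverability.
RESTORE VERIFYONLY FROM DISK = N'X:\Backups\YourDatabase.bak';
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query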

Recovery is all about coming back when you think all is lost. For a food pairing, nothing will help you recover your appetite faster from a huge meal than some dessert, like pumpkin pie.

Performance

There is no question that many accidental DBAs focus on performance tuning and troubleshooting as their top priority, mostly as a result of these tasks offering the highest visibility for them across multiple groups. For accidental DBAs I make an effort to help them understand the importance of wait events, common DMVs, and proper index maintenance.
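If wait events are new to you, this simple sketch surfaces the top waits on an instance. The exclusion list below is intentionally short; production-grade scripts filter out many more benign wait types:

$query = @"
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'WAITFOR',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TO_FLUSH')
ORDER BY wait_time_ms DESC;
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $query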

The best food pairing here is the inclusion of some vegetables. No, the mashed potatoes don’t count. Think leafy greens, like a salad, or spinach. They are going to help you perform better the next day, trust me.

Architecture

I see architecture playing a larger role even for accidental DBAs these days. Beyond database design there are questions about storage options, virtualization, high availability, and Cloud options. DBAs these days really need to be on top of a lot of architecture options in order to help businesses build reliable systems. In fact, I think the new job title should be “Cloud Database Architect”, as that better describes the role.

No question about this one, the food pairing here is the turkey itself. The turkey serves as the foundation for the entire meal. While the stuffing and/or potatoes are the performance, they can’t stand alone without the turkey. And like all good technical architecture your meal can be something other than turkey, perhaps you prefer a nice ham instead.

Security

Accidental DBAs need to have a deeper understanding of security than logins and permissions. These days a DBA has to understand various types of encryption options, mitigation of risk with regards to data breaches, and how to effectively track permission changes over time. Failure to understand the implications of the security measures being proposed could result in your data being less secure than anyone realizes.

For a Thanksgiving meal pairing, nothing brings me greater security than a second glass of red wine. Of course since it is turkey someone will say that white wine is more appropriate but this is my blog post.

Have a Happy Thanksgiving!

SQL Server on vSphere Workshop at VMware (September 15, 2014)

This past week I was fortunate enough to have been selected to take part in an elite SQL Server on vSphere workshop at VMware headquarters in Palo Alto, CA. It was, without a doubt, the finest training event I can recall attending in my life.

I’ve written previously, and received negative feedback about, my opinions for what the word “training” means. So I was happy to see VMware do all the things I’ve said need to be done for a proper training class. The organizers managed to satisfy every attendee beyond expectations. I’m sitting here at the San Jose airport trying to figure out why the training this week resonated so well with everyone and I have a few ideas I wanted to share.

Pre-Qualified Audience

VMware recruited specific people to attend this session. There wasn’t an open call. You had to be considered the “best-of-the-best” in the field of virtualizing SQL Server. Thanks to the years I’ve spent helping customers use Database Performance Analyzer with the VM option to tune their VMs running SQL Server, I was chosen. Hand-picking the class attendees made it easier to tailor the program to what we would want. Most of the common instructor-led classes don’t have this luxury; they have to lecture to the people that sign up. But it wouldn’t take much to pre-qualify attendees by having them answer a few questions, perhaps survey style, after they express interest in attending.

No Marketecture

We had more than our share of PowerPoint slides this week. But the funny thing about the slides…they were wonderful! They were concise, to the point, and devoid of any “marketecture”. The speakers were engaging, actively soliciting feedback from the class. Many times the slides were an afterthought to the nature of the conversation. And that’s what a good slide deck should be, anyway: an afterthought. The slides should be there to support your conversation, not to serve as your entire lecture. In the years I spent teaching mathematics I used exactly zero slides. I would lecture on the concepts and draw diagrams and formulas as a way to interact with the students. It’s a method that tends to work rather well for learning.

The Labs

Possibly my favorite part of the week was the hands-on labs. This is what made this an actual training event as opposed to just a lecture. By putting my hands on the products I left at the end of the week with practical experience that I would not have had if I was just being lectured to for three days. But it wasn’t just the labs, it was how the labs were constructed. I’ve attended many labs that have detailed, step-by-step instructions on how to complete tasks. You know the type: “click here, go there, do this, now you’re awesome”. Because they knew they were presenting to an elite audience, the labs this week had NONE of that crap. The labs this week just stated a task, without prompts. So for example we saw “create a datastore”, and we were left on our own to get that done. On top of that, we also worked in teams. Similar to the idea behind paired programming, we were placed into teams of three, allowing for greater interaction and learning. It was very clear that the folks at VMware put many, many hours of work into getting these labs built.

Guest Speakers

We had lots of guest speakers during the week. They included Pat Gelsinger, CEO of VMware, as well as Matt Kixmoeller, VP of Marketing for PureStorage. We also got to meet with members of the dev teams at VMware, the marketing team, and the customer support team. In each case they spoke to us very openly and candidly, trusting us with NDA-level details whenever appropriate. Having the guest speakers provide additional context on their visions of technology and market space really helped put the training content into perspective. Not once did I think to myself “why are they building this”; I already knew where they were heading, and why. On top of that, all the guest speakers seemed to be actively engaged in acquiring our feedback. This is especially true of the person that talked about working with a “very large database” that was less than 200GB. (Denny Cherry (blog | @mrdenny) just called that “adorable”.) And when we shared our experiences everyone from VMware seemed genuinely interested in what we had to share.

Cool Things

We got to see some very cool things this week. Some new products (like VSAN), ways to better use existing products, and even a ball game at AT&T park (home of the San Francisco Giants). Oh, and we had dinner on the VMware front lawn one night, too. For three straight days you were left thinking to yourself at one point or another “wow, that was cool.”

Conversations

I’ve already mentioned the guest speakers. The conversations with them were wonderful. But the conversations with each other, too, were wonderful to take part in. I’ve talked before about the idea of SQLFamily, but it was on full display this week. At the end of the week I had someone point out to me that our group seemed very familiar with each other, as if we had known each other for years and simply traveled from one event to the next, like a wolf pack or something. Yeah, it’s a lot like that. The conversations this week were wonderful as we shared stories about what we’ve seen in our shops, what might work best in some cases, and what definitely would not work in any case.

I am amazed as to how fast the past three days went. There was very little down time. We had very short breaks. Even lunch didn’t last too long. We were up at 7AM and working by 7:30. Every minute was packed with training. Guests would come in to present. Conversations would happen. We’d explore our lab scenarios. We would have another speaker, more conversations, more learning. We were tasked to break the Pure Storage flash array (we didn’t break it, but as a group we were able to push it beyond what anyone was expecting to see). By keeping things constantly moving along we never had time to be distracted by the outside world. I’ve got three days of emails to get caught up on, as well as a family that I’ve spoken with only briefly during a 10-minute break two days ago. The fast pace and energy from the speakers really added to the overall experience.

I’m very happy to have taken part in this unique event put on by VMware. I’m hoping they continue with this program, as I can see real value in growing this to something on par with the Microsoft MVP program.
