Write Data from PowerShell to Azure Table Storage

I worked on a project recently that wrote data from PowerShell into a CSV file. The goal was to do real-time trending based on the output, but I ran into an issue with file locks as PowerShell and the other program competed for access to the CSV. That's when I got the idea to write to Azure Table Storage instead of to a CSV. The project didn't work out for other reasons, but I did work out how to write data from PowerShell into Azure Table Storage. This post is about how I did that.

The key to writing data to Table Storage is that every entity requires a key (pun intended). This key consists of a timestamp (added on the server side), a partition key, and a row key. The partition key is used for load balancing, as partitions from the same table can be spread across different resources. The row key is a value that must be unique within its partition. The data I'm writing doesn't contain a unique value, so instead I simply generate a new GUID and use that for the row key. More information can be found here.
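Generating the row key is a one-liner. A GUID is, for practical purposes, collision-free, which satisfies the requirement that a row key be unique within its partition:

```powershell
# The data has no natural unique column, so each row gets a fresh GUID.
# A GUID is effectively guaranteed to be unique, which is all Table
# Storage requires of a row key within a partition.
$rowKey = [guid]::NewGuid().ToString()
```

Later in the script this same expression is used inline when writing each row.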

With that, let's configure the prerequisites. First, you will need an Azure subscription and a storage account. Create a new table in the storage account; that is where the data will be written. You will also need Azure Storage Explorer, which will be used to view the data written to the table.

Azure Storage Explorer requires an access key from the storage account. Keep this secure; it is essentially a full-control password for the storage account. Use this key and the storage account name to log into Azure Storage Explorer.
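You can copy the key from the Azure portal, or pull it with PowerShell. A sketch using the classic AzureRM cmdlets, assuming an authenticated session; the resource group and account names are placeholders:

```powershell
# Assumes the AzureRM.Storage module and an authenticated session
# (Login-AzureRmAccount). Names below are placeholders for your own.
$keys = Get-AzureRmStorageAccountKey -ResourceGroupName 'MyResourceGroup' `
    -Name 'mystorageaccount'

# Two keys are returned; either one works. Treat the value like a password.
$accessKey = $keys[0].Value
```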

Next, generate a new Shared Access Signature (SAS) for the storage account. Unlike the storage access key, a SAS can be scoped to a storage type, specific access rights, and a source IP, and can have an expiration date. At a minimum, the token will need to be scoped for Table storage with write access.
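You can generate the SAS in the portal, or with PowerShell. A sketch using the Azure.Storage cmdlets; the account name and key are placeholders:

```powershell
# Assumes the Azure.Storage module. Build a context from the account name
# and access key, then issue an account-level SAS scoped to Table storage.
$ctx = New-AzureStorageContext -StorageAccountName 'mystorageaccount' `
    -StorageAccountKey '<storage account key>'

# 'rwa' = read, write, add; the expiration date limits the blast radius
# if the token leaks.
$sasToken = New-AzureStorageAccountSASToken -Service Table `
    -ResourceType Service,Container,Object `
    -Permission 'rwa' `
    -ExpiryTime (Get-Date).AddDays(30) `
    -Context $ctx
```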

Now that the prerequisites are set, let's move on to the script. The full version of the script can be found at my GitHub site here. Start by defining the variables for your environment. This includes the data gathered previously from the storage account. There is also a partition key defined; as mentioned above, this is required by Azure Table Storage and is used for load balancing. Finally, I define the array that will hold the data to be written into table storage.

# Step 1, Set variables
# Enter Table Storage location data 
$storageAccountName = '<Enter Storage Account Here>'
$tableName = '<Enter Table Name Here>'
$sasToken = '<Enter SAS Token Here>'
$dateTime = Get-Date
$partitionKey = 'Svr1PerfData'
$processes = @()

Next, create the storage context and retrieve a reference to the table. This is what PowerShell will use to authenticate to and update Table Storage.

# Step 2, Connect to Azure Table Storage
# (New-AzureStorageContext and Get-AzureStorageTable come from the
# Azure.Storage module; Add-StorageTableRow, used in Step 4, comes
# from the AzureRmStorageTable module)
$storageCtx = New-AzureStorageContext -StorageAccountName $storageAccountName -SasToken $sasToken
$table = Get-AzureStorageTable -Name $tableName -Context $storageCtx

In this step I gather the data that will be written to the table. I used a simple Get-Process command that returns the top 10 processes by CPU usage. The output from the command is stored in the array.

# Step 3, get the data 
$processes = Get-Process | Sort-Object CPU -Descending | Select-Object -First 10

The last step loops through the array, writing data into the table.

# Step 4, write the rows to the table
foreach ($process in $processes) {
    Add-StorageTableRow -table $table -partitionKey $partitionKey -rowKey ([guid]::NewGuid().ToString()) -property @{
        # MM/HH (capitals) give month and 24-hour time; lowercase mm/hh
        # would give minutes and 12-hour time
        'Time'        = $dateTime.ToString("yyyyMMdd:HHmmss")
        'ProcessName' = $process.Name
        'ID'          = $process.Id
        # TotalProcessorTime.Minutes is only the minutes component (0-59);
        # TotalSeconds captures the full CPU time
        'CPUTime'     = $process.TotalProcessorTime.TotalSeconds
        'Memory'      = $process.WS
    } | Out-Null
}

Run the script and go back into Storage Explorer.  After refreshing the view, you should be able to see the data written into your Storage Table.
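If you would rather verify from PowerShell than from Storage Explorer, the AzureRmStorageTable module (the same module that, as I understand it, supplies Add-StorageTableRow) includes a read cmdlet. Note that the SAS token will need read permission for this to work:

```powershell
# Assumes the AzureRmStorageTable module and a SAS with read access.
# Reuses $table from Step 2 of the script.
Get-AzureStorageTableRowAll -table $table |
    Sort-Object CPUTime -Descending |
    Format-Table ProcessName, ID, CPUTime, Memory -AutoSize
```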

That’s all there is to it.
