Friday, August 5, 2016

vRealize Orchestrator, "Hello World!" and VM query

In the first vRealize Orchestrator post, we connected Orchestrator to vCenter.

Now it's time to create our first workflows.

With a new tool, you always need to do a 'Hello World' first, so let's do it.

I have created a folder called 'vLAB', and under it we create a new workflow.



Give it the name 'Hello World'.



In the workflow editor, on the Schema tab, drag'n'drop a 'Scriptable task' into the workflow, between the start and end icons.



Click the pencil icon on top of the 'Scriptable task' to edit it.


In the script editor, on the Scripting tab, write:
System.log("Hello World!");

And hit Close.



Now we can run our workflow by clicking the 'Run' button on the Schema tab of the workflow editor.



And on the Log tab, we can see the output! We have successfully run our first workflow!


Cool! Now let's do something more productive. Let's get all the VMs that we have in our VMware environment.

So, create a new workflow named 'Get All Virtual Machines', add the built-in action 'getAllVMs', and then add a Scriptable task, so it should look like this:



The getAllVMs action returns all VMware virtual machines, so we need to set up an attribute (a predefined variable) in our workflow. Go to the 'General' tab in the workflow editor and add a new attribute by clicking the 'A+' icon.



It will create an attribute named 'att0'. It's a good idea to rename it, so click the name and rename it to 'AllVMs'.



Then click on 'string' to change the type of this attribute. Set the type to 'Array of...' and search for 'VC:VirtualMachine'.



Now we need to direct the output of the 'getAllVMs' action into our attribute 'AllVMs'. Go back to the 'Schema' tab and hit the pencil icon on top of the 'getAllVMs' action.

Go to the 'Visual Binding' tab, select 'actionResult' and drag it on top of 'AllVMs' to connect them.



Next, edit the 'Scriptable task' and go to 'Visual Binding'.

Drag 'AllVMs' to the empty space in the center of the window, on the 'IN' side, so it should look like this:



Now we can use the information inside 'AllVMs' in our Scriptable task.

Go to the 'Scripting' tab, and write:
System.log(AllVMs);

(Do note that since this is JavaScript, almost everything is case sensitive.)
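
For example, to log the name of each VM instead of dumping the whole array, you could extend the scriptable task with a simple loop. This is just a minimal sketch, assuming each VC:VirtualMachine object exposes a name property (it does through the vCenter plug-in):

for (var i = 0; i < AllVMs.length; i++) {
    System.log("VM " + i + ": " + AllVMs[i].name);
}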



And now we can run our workflow, and we should get a list of our VMs on the 'Log' tab.



So, we have made our first two workflows.

In the next posts, I'll show how to utilize that data to do actions against those VMs.



Monday, July 4, 2016

Exporting documents from PasswordState

I have been using PasswordState for a couple of years, and I have to say that I like it.

But there is one feature missing. Since you might want to take an offline copy of everything in your password management solution to some offsite place - in a form that does not require you to set up your servers in case of disaster - you might need to copy out more than just your passwords.

Exporting passwords is easy - it's a built-in feature. But exporting documents that you might have attached (like certificate files) is not that easy.

You have to use the API to download the documents and read the file names from the database. A bit tricky, and if you need to do it regularly, you need at least a (PowerShell) script to do it for you.

So here is what I did:

 
#SQL Functionality from:
#https://cmatskas.com/execute-sql-query-with-powershell/


Param(
 [string]$apikey,
 [string]$exportpath 
 )
 
if (-not $apikey) { Throw "Mandatory parameter apikey not specified" }
if (-not $exportpath) { Throw "Mandatory parameter exportpath not specified" }
if (!(Test-Path $exportpath)) { Throw "Exportpath $exportpath does not exist" }

$ItemDocPath = $exportpath + "\PWItemDocuments"
$ListDocPath = $exportpath + "\PWListDocuments"


if(!(test-path $ItemDocPath)){New-Item -ItemType directory $ItemDocPath}
if(!(test-path $ListDocPath)){New-Item -ItemType directory $ListDocPath}



#ENVIRONMENT VARIABLES, MODIFY TO MATCH YOUR ENVIRONMENT
$Server = "yoursqlinstance"
$Database = "yourdatabase"
$PasswordStateSiteUrl = "https://yourpasswordstateurl/"
#ENVIRONMENT VARIABLES, MODIFY TO MATCH YOUR ENVIRONMENT

#DO NOT CHANGE THESE
$QueryListDocuments = $("SELECT * FROM ["+$Database+"].[dbo].[PasswordListDocuments]")
$QueryItemDocuments = $("SELECT * FROM ["+$Database+"].[dbo].[PasswordDocuments]")
$QueryPasswordLists = $("SELECT [PasswordListID],[PasswordList],[Description] FROM ["+$Database+"].[dbo].[PasswordLists]")
$QueryPasswords = $("SELECT [PasswordID],[Title],[PasswordListID] FROM ["+$Database+"].[dbo].[Passwords]")
#DO NOT CHANGE THESE

function ExecuteSqlQuery ($Server, $Database, $SQLQuery) {
    $Datatable = New-Object System.Data.DataTable
    $Connection = New-Object System.Data.SQLClient.SQLConnection
    $Connection.ConnectionString = "server='$Server';database='$Database';trusted_connection=true;"
    $Connection.Open()
    $Command = New-Object System.Data.SQLClient.SQLCommand
    $Command.Connection = $Connection
    $Command.CommandText = $SQLQuery
    $Reader = $Command.ExecuteReader()
    $Datatable.Load($Reader)
    $Connection.Close()
    
    return $Datatable
}

$PasswordLists = New-Object System.Data.DataTable
$PasswordLists = ExecuteSqlQuery $Server $Database $QueryPasswordLists

$Passwords = New-Object System.Data.DataTable
$Passwords = ExecuteSqlQuery $Server $Database $QueryPasswords

$ListDocuments = New-Object System.Data.DataTable
$ListDocuments = ExecuteSqlQuery $Server $Database $QueryListDocuments

$ItemDocuments = New-Object System.Data.DataTable
$ItemDocuments = ExecuteSqlQuery $Server $Database $QueryItemDocuments



foreach ($document in $ListDocuments) {
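 #Download this password list document through the API and collect its metadata for the CSV export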
 $command = 'curl ' + $PasswordStateSiteUrl+'api/document/passwordlist/"'+$document.DocumentID+'"?apikey='+$apikey+' -OutFile "'+$ListDocPath+'\'+$document.DocumentID+'_'+$document.DocumentName+'"'
 invoke-expression $command

 $temp = $passwordlists|Where-Object {$_.PasswordListID -eq $document.PasswordListID}
 $pwlistname = $temp.PasswordList
 
 [array]$PWListDocuments += New-Object -TypeName PSObject -Property @{
   "OriginalFileName" = $document.DocumentName
   "DocumentID" = $document.DocumentID
   "Description" = $document.DocumentDescription
   "PasswordList" = $pwlistname
   "Modified" = $document.Modified
   "ModifiedBy" = $document.ModifiedBy
  }
 
 
 }

 
foreach ($document in $ItemDocuments) {
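 #Download this password item document through the API and collect its metadata for the CSV export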
 $command = 'curl ' + $PasswordStateSiteUrl+'api/document/password/"'+$document.DocumentID+'"?apikey='+$apikey+' -OutFile "'+$ItemDocPath+'\'+$document.DocumentID+'_'+$document.DocumentName+'"'
 invoke-expression $command
 
 $temp = $Passwords|Where-Object {$_.PasswordID -eq $document.PasswordID}
 $PasswordTitle = $temp.Title
 $PasswordListID = $temp.PasswordListID
 
 $temp = $passwordlists|Where-Object {$_.PasswordListID -eq $PasswordListID}
 $pwlistname = $temp.PasswordList
 
 
 [array]$PWDocuments += New-Object -TypeName PSObject -Property @{
   "OriginalFileName" = $document.DocumentName
   "DocumentID" = $document.DocumentID
   "Description" = $document.DocumentDescription
   "PasswordItem" = $PasswordTitle
   "PasswordList" = $pwlistname
   "Modified" = $document.Modified
   "ModifiedBy" = $document.ModifiedBy
   
   
   
   
  }
 
 }

 
if ($PWListDocuments){
 $outputfile = $exportpath+'\PWListDocuments.csv'
 $PWListDocuments|Export-CSV -useCulture -NoType -Encoding UTF8 $outputfile
 }

 if ($PWDocuments)
 {
  $outputfile = $exportpath+'\PWItemDocuments.csv'
  $PWDocuments|Export-CSV -useCulture -NoType -Encoding UTF8 $outputfile
 }



So, this script does the following things when you run it:

  • Needs two parameters:
    • apikey: you need to have a system-wide API key in your PasswordState
    • exportpath: the root path where documents etc. are exported
      • this path needs to exist
  • It creates two folders under exportpath, 'PWItemDocuments' and 'PWListDocuments'.
  • Does four queries against your PasswordState database to read the information we need
  • Exports all documents that are attached to password lists to the 'PWListDocuments' folder (with the ID number as a prefix)
  • Exports all documents that are attached to password items to the 'PWItemDocuments' folder (with the ID number as a prefix)
  • Creates two .csv files in exportpath, with the following info:
    • PWListDocuments.csv
      • Original filename
      • Document ID in PasswordState
      • Description of document
      • Password list name where document is attached to
      • Modified date
      • Modifier
    • PWItemDocuments.csv
      • Original filename
      • Document ID in PasswordState
      • Description of document
      • Password item name where document is attached to
      • Password list name where that item is
      • Modified date
      • Modifier
The reason I'm adding the document ID as a prefix is that you might have duplicate filenames in your attachments, and this way we can get all of them exported.
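
For reference, running it then looks something like this - the script name, API key and path here are just placeholders for whatever you use:

.\Export-PasswordStateDocuments.ps1 -apikey "your-system-wide-api-key" -exportpath "D:\PasswordStateExport"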


Sunday, June 19, 2016

vRealize Orchestrator, connecting to vCenter

I have been working with vRealize Orchestrator for almost a year now, and I think it's a hidden jewel in the VMware stack. But I have to admit it's not the easiest tool to start playing with, so I'm planning to write about some of the things I have done with it, starting with very basic stuff and maybe, at some point, some more complicated (and more useful) stuff.

One of the first things to do is to add your vCenter server to Orchestrator. In Orchestrator, go to the workflow "Library / vCenter / Configuration / Add a vCenter Server instance". Right-click the workflow and select "Start workflow...".

I have a vCenter appliance at vcsa01.vlab.dom, so I add that to the first field. Then I change the last option (Ignore certificate warnings) to Yes and click Next.



On the next page, we set up the account to be used when connecting to vCenter. I'm using my vCenter server's SSO domain, so the account is vlabsso.dom\administrator; you could also use a dedicated AD service account for this. I also chose to always use this account when using Orchestrator, so the answer to the first question is 'No'.



After submitting this workflow, if everything goes well, we have added our vCenter to Orchestrator. You can add multiple vCenters to one Orchestrator.

To verify that vCenter was added successfully, go to the Inventory tab and check that you can browse your vCenter environment.



Now we have successfully added vCenter to our Orchestrator.

Next post: first workflows

Thursday, June 16, 2016

Sysinternals 20th anniversary party in Helsinki, Finland

This is mostly my personal notebook for the day, but you might be interested as well...

Sysinternals share: \\live.sysinternals.com\Tools\

Like almost always with video meetings, we start the day by debugging the Skype meeting. We got a good live lesson in how not to start a video conference: in a hurry.

After about 10 minutes, we finally hear Mark Russinovich giving the welcome speech, but it ends too early, since we lost audio again...



After that hassle, we luckily have the rest of the speakers on site. Next up: Aaron Margosis, giving the opening speech.

Some examples of Sysinternals' influence on the Windows ecosystem


Tips & tricks:
use '-ct | clip' -> it formats the output as tab-delimited and copies it to the clipboard -> paste into Excel
the -nobanner switch is coming in future versions





PsExec -s -i -d -> run as local system, interactively, and don't wait for the command to finish

du -> quite familiar from *nix, has nice features

streams -d * -> unblock 'downloaded' files (deletes the Zone.Identifier alternate data stream)


Now some security with Paula Januszkiewicz - maybe the most interesting session; only small parts written down here...



Sysmon logs all process activity to a dedicated event log branch



Parsing logfiles



Extract hashes with the TekDefense Python tool (http://www.tekdefense.com/tekcollect/)


Check those hashes against the https://www.virustotal.com/ database via its API (tool: http://www.woanware.co.uk/forensics/virustotalchecker.html)




Configuring:









And lots of cool demos of using Sysmon!






https://github.com/gentilkiwi/mimikatz

After lunch, Tim Mangan on Process Monitor and problem debugging with App-V.

"Apps suck" - I love this guy already.

So, it's a lot of App-V related stuff - not very high on my interest list, but still interesting to listen to. Not many notes, though.





Process Monitor: use filters (e.g. filter out known-safe items), use highlighting (e.g. highlight result 'Success' and look for something that is not highlighted), save to PML with all data...


Short break, then Brian Catlin and Process Explorer.











Virtual vs. Physical address space





Daniel Pearson - LiveKd, ProcDump & NotMyFault



Whee, no powerpoints, only live demos

Procdump, "ProcDump is a command-line utility whose primary purpose is monitoring an application for CPU spikes and generating crash dumps during a spike that an administrator or developer can use to determine the cause of the spike."

Example: procdump notepad -> writes a .dmp of the notepad process
To delay and take multiple dumps: procdump -s 5 -n 3 (writes 3 dumps, 5 seconds apart)
-c 25 -> take a dump when CPU usage hits 25%; if you add -s 5, the CPU has to stay at that level for 5 seconds - this can be used to monitor a CPU spike and dump the state during that spike.
To see usage examples: procdump -? -e
procdump -e 1 -f "" -> filter for exceptions (with an empty filter it monitors and displays them all, but does not dump)

You can use WinDBG to debug dump files.

LiveKD

https://technet.microsoft.com/en-us/sysinternals/livekd.aspx

You can "attach" to live windows

NotMyFault (my all time favorite tool)

In short, you can crash Windows (cause a BSOD) in multiple ways with this tool.



"How it's done in Enterprise environment" - Petri Paavola

- There is usually only one, or at best a couple of, guys who can really do troubleshooting.
- Which overloads that dude.

In a client environment, you can run Procmon remotely with PsExec -> get a trace of the error to the experts (see the example below).
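
For example, something like this - just a rough sketch with a made-up client name and output path, assuming Procmon.exe sits in your current directory:

psexec \\client01 -s -d -c -f procmon.exe /AcceptEula /Quiet /Minimized /BackingFile C:\Temp\trace.pml

Then reproduce the problem, stop the capture by running procmon.exe /Terminate the same way through PsExec, and copy C:\Temp\trace.pml back for analysis.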



Network tracing: use netsh, since it's built-in.

netsh trace start ....
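
The basic form is something like this (start the trace, reproduce the issue, then stop it - the trace file path is just a placeholder):

netsh trace start capture=yes tracefile=C:\Temp\nettrace.etl
netsh trace stop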

In client environment -> use PsExec to run it remotely.

Then use netmon / message analyzer to analyze that trace.

You can also use ProcMon to monitor network traffic.

WPA, Windows Performance Analyzer -> analyze slow boot times.


Mikko Järvinen - Troubleshooting extravaganza

Stories about client troubleshooting.. no notes, but nice stories :)



Nice day; most sessions were really deep dives, and it's easy to understand why those guys are MVPs.



Sunday, March 27, 2016

Migrating VMs with Veeam Backup & Replication

I have been involved in a project where we need to migrate hundreds of VMware VMs from one datacenter to another.

When thinking about how we should do the migration, we decided to test whether Veeam's replication functionality would work for us. And it does. In our case, downtime for servers outside office hours is mostly not an issue, which made this even easier.

Why did we end up using Veeam?

  • We are already using it as our backup solution, and also doing replication for DR purposes
  • We have limited network capacity between datacenters, and Veeam compresses data while doing replication
  • Network labels and subnets differ between the datacenters, and Veeam can re-map IP addresses and change a VM's network while replicating, which makes it faster to bring those servers back online at the new DC.

How are we doing it?

We have a migration script that does the following steps:

  • Reads the list of VMs that are in a specific Veeam replication job used for migrations
  • Shuts down those servers in the old datacenter (tries to do it gracefully, but if that does not succeed, does a hard power-off - which has not happened so far)
  • Changes some settings on those VMs before migration
  • Starts the Veeam replication job to move the servers to the new datacenter
And when that is done, we manually do a failover in Veeam to start the servers in the new DC.

One notable thing about replication in our case: it would have been possible (and was our original idea) to migrate most of the data beforehand and, during the actual migration, replicate only the changed data, keeping the actual migration window really short. The only problem is that in the old environment we noticed CBT was corrupted on some machines, which forced us to disable CBT on the migration job.

Here is the script we are using. It needs to run on the Veeam Backup & Replication server, since we have not yet upgraded to v9.

This script can be run as a scheduled task, so you can build your replication job in advance and let the script do the work at night (a sketch of registering it as a scheduled task follows after the script).

#Change these settings
$logpath = "c:\scripts\migration_log\"
$vCenter = "oldvcenter.vmware.local"
$vCenterUser "vmware\administrator"
$vCenterPass "P@ssw0rd"
$ReplicaJob = "VeeamReplicaJobName"

#Make sure that VMware snapins are loaded
if ( (Get-PSSnapin -Name VMware* -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PsSnapin VMware*
}
#Add Veeam functions
& 'C:\Program Files\Veeam\Backup and Replication\Backup\Install-VeeamToolkit.ps1'


#This function is grabbed from: http://ict-freak.nl/2009/10/05/powercli-enabledisable-the-vm-hot-add-features/
Function Enable-vCpuHotAdd($vm){
    $vmview = Get-vm $vm | Get-View 
    $vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $extra = New-Object VMware.Vim.optionvalue
    $extra.Key="vcpu.hotadd"
    $extra.Value="true"
    $vmConfigSpec.extraconfig += $extra
    $vmview.ReconfigVM($vmConfigSpec)
}

#This function is grabbed from: http://ict-freak.nl/2009/10/05/powercli-enabledisable-the-vm-hot-add-features/
Function Enable-MemHotAdd($vm){
    $vmview = Get-vm $vm | Get-View 
    $vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $extra = New-Object VMware.Vim.optionvalue
    $extra.Key="mem.hotadd"
    $extra.Value="true"
    $vmConfigSpec.extraconfig += $extra
    $vmview.ReconfigVM($vmConfigSpec)
}

#https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010048
Function Enable-AutoUpdate($vm){
 $vmview = $vm | Get-View
 $vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec
 $vmConfigSpec.Tools = New-Object VMware.Vim.ToolsConfigInfo
 $vmConfigSpec.Tools.ToolsUpgradePolicy = "UpgradeAtPowerCycle"
 $vmview.ReconfigVM($vmConfigSpec)
}

#Starts logging of all activities in this script
$timestamp = get-date -Format yyyy.dd.MM.HH.mm.ss
$logfile = $logpath + $timestamp +".log"
start-transcript -path $logfile -append

#Connects to vCenter server
Connect-VIServer $vCenter -User $vCenterUser -Password $vCenterPass

#Gets list of VMs from replication job in Veeam 
$VMlist = Get-VBRJob -Name $ReplicaJob|Get-VBRJobObject|Where-Object {$_.type -like "Include"}|Select Name

write "VMs in replica job"
write $VMlist

# shutdown logic idea from: http://www.virtu-al.net/2010/01/06/powercli-shutdown-your-virtual-infrastructure/
$VMs = get-vm $VMlist.name
foreach($VM in $VMs)
{
 write "Shutting down VM " $VM.name
 $VM | Shutdown-VMGuest -Confirm:$false
}

#Set the amount of time to wait before assuming the remaining powered on guests are stuck 
$waittime = 1800 #Seconds

$Time = (Get-Date).TimeofDay
do {
    #Wait for the VMs to be shut down cleanly
    sleep 1.0
    $Newtime = (Get-Date).TimeofDay - $Time
    $timeleft = $waittime - [int]$Newtime.TotalSeconds
    $numvms = (Get-VM $VMlist.name | Where { $_.PowerState -eq "poweredOn" }).Count
    Write "Waiting for shutdown of $numvms VMs or until $timeleft seconds"
    } 
 until ((@(Get-VM $VMlist.name | Where { $_.PowerState -eq "poweredOn" }).Count) -eq 0 -or $Newtime.TotalSeconds -ge $waittime)
 
 
$ForceShutdown = Get-VM $VMlist.name | Where { $_.PowerState -eq "poweredOn" }
 
if ($ForceShutdown) {
 Write "Starting forced poweroff for following servers"
 write $ForceShutdown
 Stop-VM $ForceShutdown -Confirm:$false
}

#Enable Memory hotadd, cpu hotadd and VMware tools autoupdate 
foreach($VM in $VMs)
{
 write "Change settings on VM: $VM"
 Enable-MemHotAdd $VM
 Enable-vCpuHotAdd $VM
 Enable-AutoUpdate $VM
}

#Finally start replication job
Start-VBRJob -Job $ReplicaJob
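
To have this run unattended, here is a minimal sketch of registering the script as a scheduled task, assuming Windows Server 2012 or newer (on older servers, use schtasks.exe instead). The script path, task name, account, password and run time below are just placeholders:

#Register the migration script to run once at night under a service account
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\scripts\Migrate-VMs.ps1"
$trigger = New-ScheduledTaskTrigger -Once -At "03:00"
Register-ScheduledTask -TaskName "VM Migration" -Action $action -Trigger $trigger -User "vmware\svc_migration" -Password "P@ssw0rd" -RunLevel Highest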


Monday, February 22, 2016

Impressions on Tintri T880

Whoa, it's been months since I last updated my blog. Our migration project was delayed for many reasons, but it's now going full speed, so I finally have some experience of how Tintri performs in real life.

So, we now have about 240 virtual machines running on our T880. And this is how it looks at an overall level. Of course, this is just a short sneak peek, but it looks quite good.


But to be honest, it really should look like that. As you can see in the T880 specs, it has 8.8 TB of flash, so almost all of the data still fits into that flash.

So how much data do we have in our Tintri now?

Space allocated to VMs: ~37 TB
Since everything is thin provisioned, logical space consumption is 19 TB
And with Tintri's compression in action, real used space is 9.7 TB - roughly a 2:1 saving (19 / 9.7 ≈ 1.96) from compression alone.

That's actually pretty nice, right?

Space savings



What about latency and flash hit ratio, two points that Tintri uses quite heavily in its marketing? Well, as said, there is only 9.7 TB of actual data, so the flash hit ratio should be quite good, right? And it has been - when checking it, it is usually between 98-100%. But on some occasions it can be way less; here is a screenshot of the last 7 days.

Flash hit ratio

But what happened? Tintri does auto-tiering and keeps hot blocks on flash, and everything is written to SSD first, then to HDD. So what happened in that timeframe, when the flash hit ratio was ~50% at its worst?

What about latency? Tintri says it should stay under 1 ms. And almost all the time it does, but let's have a look at our 7-day graph. Blue is storage latency, so you can see that the latency is not caused by the hosts or the network.



What happened, did Tintri fail?

Well, these two pictures explain it a little. First we have the IOPS graph (blue is write, yellow is read):

IOPS
And then we have throughput graphs:


So we can see really heavy activity.

And the reason for all this is: full backups. During normal activity, latency and flash hit ratio stay at a really good level, but it's quite understandable that while running a full backup, data must be read from HDD, and latency starts to look similar to what we see on traditional storage systems with HDD disks.

So, have we been happy with this? Yes, we have had no problems since day one.

And our users? Well, we have been migrating servers from a really old legacy environment, with EOL hosts and a years-old, midrange (actually still quite well performing) storage system.

So when the VMs get onto new hosts running on top of Tintri, it's of course a huge improvement in performance, and all the feedback we have been getting from our users has been positive.

When we get our migrations done (the VM count will probably end up somewhere between 500 and 600) and we have even more load, I'll update these statistics. I'm quite curious to see whether Tintri will handle the load as it should. So far, it has done everything it promised.