How to speed up the process in PowerShell


I have a simple script that generates an orphan OneDrive report. It works, but it's very slow, so I'm wondering how I could modify my script to speed up the process. Any help or suggestions would be really appreciated.

Basically this is what I'm doing:

  1. Get the owner email from the "Owner" column and check it using AzureAD to see if I get any error
  2. If I get an error, check that owner in the on-prem AD group
  3. If that owner exists in the on-prem AD group, then it's an orphan
  4. Export only that user to a new CSV file

$ImportData = "E:\scripts\AllOneDriveUser.csv"
$Report = "E:\scripts\OrphanOneDrive.csv"

$CSVImport = Import-CSV $ImportData

ForEach ($CSVLine in $CSVImport) {
    $CSVOwner = $CSVLine.Owner

    try{
         Get-AzureADUser -ObjectId $CSVOwner 
    }catch{
        $StatusMessage = $_.Exception.Message
        if($Null -eq $StatusMessage){
            Write-Host "User Found, Ignore from orphan list."
        }else{
            #Owner not found in AzureAD
            $group  = 'TargetGroup'
            $filter = '(memberof={0})' -f (Get-ADGroup $group).DistinguishedName
            $filterName = Get-ADUser -LDAPFilter $filter

            $ModifiedOwner = $CSVOwner -split "@"

            if( $ModifiedOwner[0] -in $filterName.Name ){
                Write-Host "Adding it into orphaned list"
                $CSVLine | Export-Csv $Report -Append -NoTypeInformation -Force
            }else{
                Write-Host "Not orphaned"
            }
        }
    }
}

I have over 8000 records in my import CSV file and over 5000 members in my on-prem AD group, so it's taking very long.

CodePudding user response:

You can greatly improve your script by using a HashSet<T> in this case, but the main issue with your code is that you're querying the same group over and over inside the loop; that query should be moved outside the loop!
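
For illustration only, a minimal sketch of the lookup difference (the group DN and UPN below are placeholders, not values from your environment):

# linear scan: `-in` / `-contains` walk the whole array on every check
$upns  = (Get-ADUser -LDAPFilter '(memberof=CN=SomeGroup,OU=Groups,DC=contoso,DC=com)').UserPrincipalName
$found = $upns -contains 'someone@contoso.com'

# HashSet<T>: case-insensitive, constant-time lookups
$set = [System.Collections.Generic.HashSet[string]]::new(
    [string[]] $upns,
    [System.StringComparer]::OrdinalIgnoreCase
)
$found = $set.Contains('someone@contoso.com')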

There is also the use of Export-Csv -Append: appending to the file on every loop iteration is very slow. It's better to streamline the process with the pipeline so that Export-Csv receives the objects and exports only once, instead of opening and closing the FileStream every time.
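
As a rough sketch of the difference ($rows, SomeProperty and SomeValue are placeholders, not names from your script):

# slow: -Append re-opens and closes the output file on every iteration
foreach ($row in $rows) {
    $row | Export-Csv $Report -Append -NoTypeInformation
}

# fast: stream through the pipeline so Export-Csv opens the file once
$rows | Where-Object { $_.SomeProperty -eq 'SomeValue' } |
    Export-Csv $Report -NoTypeInformation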

Hope the inline comments explain the logic you can follow to improve it.

$ImportData = "E:\scripts\AllOneDriveUser.csv"
$Report     = "E:\scripts\OrphanOneDrive.csv"

# we only need to query this once, outside the try / catch
# a HashSet<T> enables faster lookups,
# much faster than `-in` or `-contains`
$filter  = '(memberof={0})' -f (Get-ADGroup 'GroupName').DistinguishedName
$members = [Collections.Generic.HashSet[string]]::new(
    [string[]] (Get-ADUser -LDAPFilter $filter).UserPrincipalName,
    [System.StringComparer]::OrdinalIgnoreCase
)

Import-CSV $ImportData | ForEach-Object {
    # hitting a `catch` block on every missing user is expensive,
    # better to use `-Filter` here with an `if` condition
    $CSVOwner = $_.Owner
    if(Get-AzureADUser -Filter "userprincipalname eq '$CSVOwner'") {
        # we know this user exists in Azure, so go next user
        return
    }
    # here is for user not found in Azure
    # no need to split the UserPrincipalName, HashSet already has
    # a unique list of UserPrincipalNames
    if($members.Contains($CSVOwner)) {
        # here is if the UPN exists as member of AD Group
        # so output this line
        $_
    }
} | Export-Csv $Report -NoTypeInformation