Wednesday, July 21, 2021

Move-Mouse -- Keep your screen awake, your session from dying, etc.

This was a fun project I did a while ago.  I work in industries that typically have higher IT security standards; however, much of that is bureaucracy where the org just wants to check a box -- as was the case here.

The org I wrote this for had a security policy that required a screen idle timeout and session-logoff after 15 minutes.  That's good so that people don't stay logged into something in perpetuity; however, it becomes a pain in the neck for any process or operation that runs longer than 15 minutes (some of mine ran for hours).

So when I offered to solve the problem, they got approval from their security team and I provided a mouse-mover in PowerShell.

This was an interesting exercise, as it forced me to get a better understanding of both screen positioning and which actions actually interact with Windows' built-in idle-tracking.

As an example of one of my failed experiments: I could not locate any .NET class that interacts with Windows' idle-tracking.  Everything I tried moved the mouse or entered keys successfully, but the session was still terminated by the security policy.
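For illustration, here's a hedged sketch of the kind of .NET-only approach that fails this way (my example, not part of the original tool): setting System.Windows.Forms.Cursor.Position goes through SetCursorPos, which moves the pointer without generating an input event, so Windows' idle timer keeps ticking.

Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Drawing

## The cursor visibly moves, but Windows does not count this as user input,
## so the security policy's idle timeout still fires.
$p = [System.Windows.Forms.Cursor]::Position
[System.Windows.Forms.Cursor]::Position = New-Object System.Drawing.Point(($p.X + 5), ($p.Y + 5))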

So I went a little deeper and used Platform Invocation Services (P/Invoke).  That worked: it moved the mouse in a way that Windows' idle-tracking recognizes as input, at the cost of a new layer of complexity.  In the end that complexity didn't matter much, but it's worth noting.

System.Windows.Forms.Cursor represents the cursor's position in screen pixels.

user32.dll::mouse_event(), wrapped below as [PoSh.Mouse]::MoveTo(), moves in relative units called 'mickeys' (raw units of mouse motion, not pixels); with the MOUSEEVENTF_ABSOLUTE flag it would instead address a normalized 65535 x 65535 grid, but this script moves relatively.

So... if you choose to emit the mouse position but set $XY to something really small, you won't actually see the mouse move, and the numbers reported to the screen won't change.

Again, this is because at the default movement of 1 mickey, the motion is too small to shift the cursor by a full pixel, so the .NET Cursor class reports the same position.
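A quick way to see the mismatch (a sketch of mine; it assumes the PoSh.Mouse type from the function below has already been loaded into the session):

Add-Type -AssemblyName System.Windows.Forms

$before = [System.Windows.Forms.Cursor]::Position
[PoSh.Mouse]::MoveTo(1, 1)   ## a single mickey -- typically less than one pixel
Start-Sleep -Milliseconds 50
$after = [System.Windows.Forms.Cursor]::Position

"$($before.X),$($before.Y) -> $($after.X),$($after.Y)"   ## frequently identical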

The logic employed oscillates the mouse back and forth.  Initial implementations didn't negate the previous movement, so over time the mouse would drift into oblivion.  ;)

Examples:


## Move the mouse to keep the screen awake:
Move-Mouse -XY 1 -Secs 1 -LoopInfinite $true -DisplayPosition $true

## Move the mouse once:
Move-Mouse -XY 1 -Secs 1 -DisplayPosition $true

## Get frustrated!!
## Remember that CTRL-C is your friend.  ;)
Move-Mouse -XY 100 -Secs 1 -LoopInfinite $true -DisplayPosition $true


Function Move-Mouse {
param (
    ## Declare parameters
    [uint16] $XY=1,  ## Relative movement (in mickeys) applied to both the x and y axes
    [int32] $Secs = 5, ## Number of seconds to sleep between mouse movements when LoopInfinite is set
    [boolean] $LoopInfinite = $false,  ## Determines whether to loop infinitely or not
    [boolean] $DisplayPosition = $false  ## Determines whether to write the mouse's pixel location to the screen.
)

begin {

    ## Load System.Windows.Forms so [System.Windows.Forms.Cursor] can report the pixel position
    Add-Type -AssemblyName System.Windows.Forms

    ## Use a .NET type definition to access P/Invoke for the appropriate DLL and function.
    $typedef = @"
using System.Runtime.InteropServices;

namespace PoSh
{
    public static class Mouse
    {
        [DllImport("user32.dll")]
        static extern void mouse_event(int dwFlags, int dx, int dy, int dwData, int dwExtraInfo);

        private const int MOUSEEVENTF_MOVE = 0x0001;

        public static void MoveTo(int x, int y)
        {
            mouse_event(MOUSEEVENTF_MOVE, x, y, 0, 0);
        }
    }
}
"@
    ## Load the type definition into memory (skip if it already exists from a prior run in this session)
    if (-not ([System.Management.Automation.PSTypeName]'PoSh.Mouse').Type) {
        Add-Type -TypeDefinition $typedef
    }

}

process {

    ## Determine if we want to loop infinitely, default is false
    if ($LoopInfinite) {
        
        $i = 1
        while ($true) {
            ## Write the pixel location to screen
            if ($DisplayPosition) { Write-Host "$([System.Windows.Forms.Cursor]::Position.X),$([System.Windows.Forms.Cursor]::Position.Y)" }
            
            ## Use modulo to alternate the movement so each step negates the previous one (a relative -1,-1 and then 1,1 by default)
            if (($i % 2) -eq 0) {
                [PoSh.Mouse]::MoveTo($XY, $XY)
                $i++
            } else {
                [PoSh.Mouse]::MoveTo(-$XY, -$XY)
                $i--
            }

            Start-Sleep -Seconds $Secs
        }
    } else {
        if ($DisplayPosition) { Write-Host "$([System.Windows.Forms.Cursor]::Position.X),$([System.Windows.Forms.Cursor]::Position.Y)" }
    
        [PoSh.Mouse]::MoveTo($XY, $XY)
    }
}

}
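One more P/Invoke that proved handy while testing (a sketch of mine, not part of the tool): user32.dll's GetLastInputInfo reports when Windows last saw real input, which is exactly the idle-tracking the security policy watches.  Run the helper before and after a movement; the idle milliseconds should drop back toward zero if the technique registers as input.

$idledef = @"
using System;
using System.Runtime.InteropServices;

namespace PoSh
{
    public static class Idle
    {
        [StructLayout(LayoutKind.Sequential)]
        struct LASTINPUTINFO
        {
            public uint cbSize;
            public uint dwTime;
        }

        [DllImport("user32.dll")]
        static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

        // Milliseconds elapsed since the last input event Windows recorded
        public static uint MillisecondsIdle()
        {
            LASTINPUTINFO lii = new LASTINPUTINFO();
            lii.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
            GetLastInputInfo(ref lii);
            return (uint)Environment.TickCount - lii.dwTime;
        }
    }
}
"@
Add-Type -TypeDefinition $idledef

[PoSh.Idle]::MillisecondsIdle()   ## should read near 0 right after Move-Mouse fires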

Hackem Up! Disassembling Files into Chunks and Recombining Files from Chunks

So... I know I don't add much to my blog and for that, I apologize.   However, I had to do something recently that I thought my two subscribers might like.  ;)

I came across a situation where I needed to move many large files between disconnected environments and despite it being 2021, data transfer speeds can still be atrocious.

I was trying to move multiple 5GB-100GB disk backups from one state to another.  Aside from each copy being horrendously slow, when an upload fails (e.g. a timeout) at 45GB of a 100GB file and you lose those 8 hours, it can be rather frustrating.

However, if I chop the large files into smaller fragments and move (or sync) those fragments instead, I reduce the likelihood of hitting a terminating condition at all, and even when I do, each fragment is a small part of the whole, so recovery is quick.  For example, with a 100GB file split into 1GB chunks, a failed transfer costs at most one 1GB chunk of rework rather than everything copied so far.

I tried to find an existing free tool, but everything I could locate either cost money or didn't do what I wanted, so I decided to write the chopping code myself.

And correspondingly, I needed to be able to recombine the fragments on the other side into a file identical to the original source.  So without further ado, here are Chunk-File and Recombine-File:

Examples:

## Split the file into fragments
Chunk-File -FileName somefile.ext -ChunkSize 1GB
Chunk-File -FileName somefile.ext
## Recombine the file; the recombined file will have a '_new' name on it
Recombine-File -PathToChunks 'some-directory-path'
Recombine-File

## Verify the bytes were written back in the correct order
Get-FileHash -Algorithm MD5 somefile.ext, somefile_new.ext

NOTE:

Recombine-File does *not* delete the chunks.  This was intentional.  If any exception gets thrown during the recombine effort, I wanted to provide a non-destructive means of being able to try again (without having to re-copy the fragments).
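If you do want to clean up once the hashes match, here's a hedged sketch (my addition; it leans on the same '_<digits>' naming convention the functions use, with 'somefile' as a stand-in name):

## Only after Get-FileHash confirms the recombined file matches the source:
Get-ChildItem somefile_*.ext |
    Where-Object { $_.BaseName -match '_[0-9]+$' } |   ## chunks only; 'somefile_new.ext' doesn't match
    Remove-Item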

Chunk-File:

function Chunk-File {
param (
    [Parameter(Mandatory=$true)][System.String]$FileName,
    [Parameter(Mandatory=$false)][uint64]$ChunkSize
)

    try {
        ## Get a file object reference to the passed in filename.
        $File = Get-Item $FileName

        ## Open a filestream handle to the file
        $fs = New-Object System.IO.FileStream($File.FullName, [System.IO.FileMode]::Open)

        ## If a desired-size is not specified, automatically determine an appropriate chunk size
        if (-not($ChunkSize)) {
            if ($fs.length -gt 10GB) {    
                $ChunkSize = 10GB
            } elseif ($fs.length -gt 1GB) { 
                $ChunkSize = 1GB
            } elseif ($fs.length -gt 100MB) {                                
                $ChunkSize = 100MB 
            } elseif ($fs.length -gt 10MB) { 
                $ChunkSize = 10MB
            } elseif ($fs.length -gt 1MB) {                                
                $ChunkSize = 1MB 
            } elseif ($fs.length -gt 100KB) { 
                $ChunkSize = 100KB
            } elseif ($fs.length -gt 10KB) {                                
                $ChunkSize = 10KB 
            } elseif ($fs.length -gt 1KB) { 
                $ChunkSize = 1KB
            } else {
                $ChunkSize = 1
            }
        }
        
        ## Ensure the chunk size isn't larger than the filesize
        if ($ChunkSize -gt $fs.Length) {
            Write-Error "Chunk size should not be larger than the file size."
            return
        }

        ## Determine acceptable buffer size for speed/efficiency
        if ($fs.length -gt 1GB) {    
            $BufferSize = 1MB
        } elseif ($fs.length -gt 1MB) { 
            $BufferSize = 1KB
        } else {                                
            $BufferSize = 1 ## 1B buffer
        }

        #Write-Host "ChunkSize:  $ChunkSize"
        #Write-Host "BufferSize: $BufferSize"

        ## Set the first buffer size
        $buffer = New-Object byte[] ($BufferSize)
        
        ## Set some predefined parameters for use with the chunking.
        $FileIncrement = 1
        ## Zero-pad the chunk numbers so the fragments sort back into order (e.g. a 3GB file in 1GB chunks yields _01, _02, _03)
        $ZeroPadSize = ([int]($fs.Length / $ChunkSize)).ToString().Length + 1

        ## Set the auto-increment and auto-decrement values
        $BytesToRead = $fs.Length
        $BytesRead = 0

        ## Open a filestream handle to the first output fragment
        $cfs = New-Object System.IO.FileStream(("$($File.Directory)\$($File.BaseName)_$("$FileIncrement".PadLeft($ZeroPadSize, '0'))$($File.Extension)"), [System.IO.FileMode]::OpenOrCreate)

        ## Iterate through the source file to completion
        while ($BytesToRead -gt 0) {
            
            ## If the chunk file has reached the desired chunk size, close it and open the next chunk
            if ($BytesRead -gt 0 -and $BytesRead % $ChunkSize -eq 0) {
                $cfs.Dispose()
                $FileIncrement++
                $cfs = New-Object System.IO.FileStream(("$($File.Directory)\$($File.BaseName)_$("$FileIncrement".PadLeft($ZeroPadSize, '0'))$($File.Extension)"), [System.IO.FileMode]::OpenOrCreate)
            }
        
            ## Handle the case where the file size is not a multiple of the buffer size.
            ## Without limiting the size of the final buffer, the last 'chunk' would be larger than it's supposed to be.
            ## The file would still be intact and functional, but would carry a padding of zeroes at the end that would break a hash verification of the recombined output file.
            if ($BytesToRead -lt $BufferSize) {
                $buffer = New-Object byte[] ($BytesToRead)
            } else {
                $buffer = New-Object byte[] ($BufferSize)
            }

            ## Read from the source, capturing how many bytes were actually read
            $read = $fs.Read($buffer, 0, $buffer.Length)

            ## Write only the bytes that were read to the fragment
            $cfs.Write($buffer, 0, $read)

            ## Increment/Decrement
            $BytesRead += $read
            $BytesToRead -= $read
        }
    } catch {
        $_
    } finally {
        if ($fs) { $fs.Dispose() }
        if ($cfs) { $cfs.Dispose() }
    }
}
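To make the naming concrete (my numbers, not output from the original post): chunking a 3GB file at 1GB should yield three fragments whose zero-padded suffixes keep them in order.

Chunk-File -FileName backup.vhdx -ChunkSize 1GB
## expected output files: backup_01.vhdx, backup_02.vhdx, backup_03.vhdx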

Recombine-File:


function Recombine-File {
param (
    [Parameter(Mandatory=$false)][System.String]$PathToChunks
)
    try {

        ## Get a collection of file fragments that match the naming convention from 'Chunk-File'
        ## If a path is not provided, the current directory is used
        ## (Sort-Object ensures the zero-padded fragments are processed in order)
        if (-not($PathToChunks)) {
            $frags = Get-ChildItem | Where-Object { $_.BaseName -match '.*_[0-9]+$' } | Sort-Object Name
        } else {
            $frags = Get-ChildItem $PathToChunks | Where-Object { $_.BaseName -match '.*_[0-9]+$' } | Sort-Object Name
        }

        ## Ensure there are two-or-more fragments to recombine.
        if ($frags.Count -lt 2) {
            Write-Error "No chunks were found to recombine."
            return
        }
        
        ## Create a new file to write all of the fragmented data to
        $tfs = New-Object System.IO.FileStream(("$($frags[0].Directory)\$($frags[0].BaseName.Split('_')[0])_new$($frags[0].Extension)"), [System.IO.FileMode]::OpenOrCreate)

        ## Set an initial buffer
        $BufferSize = 1MB

        ## Iterate through each fragment to write to the new consolidated file
        $frags | ForEach-Object {
        
            ## Open a handle to the fragment
            $frag = New-Object System.IO.FileStream(("$($_.FullName)"), [System.IO.FileMode]::Open)

            ## Set the increment/decrement values for each fragment
            $BytesToRead = $frag.Length
            $BytesRead = 0

            ## Iterate over this fragment
            while ($BytesToRead -gt 0) {

                ## To ensure there's no extra data written to the consolidated file, adjust the buffer size for the final read
                if ($BytesToRead -lt $BufferSize) {
                    $buffer = New-Object byte[] ($BytesToRead)
                } else {
                    $buffer = New-Object byte[] ($BufferSize)
                }
                
                ## Read from the fragment, capturing how many bytes were actually read
                $read = $frag.Read($buffer, 0, $buffer.Length)

                ## Write only the bytes that were read to the consolidated file
                $tfs.Write($buffer, 0, $read)

                ## Increment/Decrement
                $BytesRead += $read
                $BytesToRead -= $read
            }

            $frag.Dispose()
        }

        Write-Output "Recombine successful:  $($tfs.Name)"
    } catch {
        $_
    } finally {
        if ($frag) { $frag.Dispose() }
        if ($tfs) { $tfs.Dispose() }
    }
}