PowerShell - The Ultimate Beginners Guide To Learn PowerShell Step-by-Step
Mark Reed
© Copyright 2020 - All rights reserved.
It is not legal to reproduce, duplicate, or transmit any part of this document
in either electronic means or in printed format. Recording of this
publication is strictly prohibited and any storage of this document is not
allowed unless with written permission from the publisher except for the
use of brief quotations in a book review.
Table of Contents
Introduction
Chapter One: PowerShell Components
Chapter Two: Commands
Chapter Three: Variables
Chapter Four: Pipelines
Chapter Five: Functions
Chapter Six: Modules and Providers
Chapter Seven: Scripting
Chapter Eight: Remoting and PowerShell Gallery
Answers to Exercises
Final Words
Introduction
Computer system administration can be difficult, especially when you
deal with large or complex computing systems. Even in the case where
systems are smaller and less complex, routine maintenance can also
represent another challenge. PowerShell is a system administration and
configuration technology that is designed specifically to address these
challenges.
Learning a new technology within the fast and vast field of computing
system administration can be daunting and very difficult, especially when
involved technologies are both fast-changing and difficult to grasp. In these
situations, it is useful to have a comprehensive guide that can help you learn
the most important components of the technology in question quickly. In
addition, the guide should summarize the learning items into a compact set
that can help you translate the technology into everyday use.
This is precisely what this book does for PowerShell 7.0
beginners. This book will provide you, as the reader, with a comprehensive
foundation in a new and exciting technology in the form of PowerShell 7.0.
This book presents the crisp rigor of computer programming in a
compact manner while maintaining comprehensive scope. The ideas
presented here capture the best system administration and configuration
technologies from the fast-changing landscape of the Microsoft .NET
platform. These ideas are presented to you in a manner that is easy to learn
so that you can quickly put them into practical use.
After reading this book, you can begin to use the most important
technologies in PowerShell 7.0 right away. This will not only improve how
you manage computing systems in the present computing landscape, but
also how you will adapt to future developments.
PowerShell is a part of some of the most important Microsoft
certifications ("Online IT Training Videos, IT Certification Training | CBT
Nuggets", 2019), including Microsoft Certified Solutions Associate:
Windows Server 2016, Microsoft Certified Solutions Expert: Productivity,
Microsoft Certified: Azure Administrator and MCSE: Mobility.
There are nine categories of scripting languages and roughly 192
languages in total ("Scripting Languages", 2018), with each language
evolving in line with the fast-paced developments in the field of computer
software. PowerShell became open-source in 2016, thus laying the
foundation for the language to evolve at an extremely fast pace. This book
will equip you with the skills to not only quickly go from a beginner to an
intermediate in one of these languages but also to put the language to
practical use very quickly.
PowerShell 7.0 utilizes the latest developments in the Microsoft .NET
platform stack in the form of the .NET Standard 2.0 library and the
.NET Core 3.1 stack ("Announcing The Powershell 7.0 Release Candidate |
Powershell", 2019). This book will help you learn how to put to practical
use the most important components of these developments within
PowerShell 7.0 instantly. Not only will you begin creating custom
administration and configuration tools for your computing environments,
but you will be able to do so for future PowerShell versions as well.
PowerShell 7.0 can be utilized in various operating system
environments (Windows, Linux, and macOS environments), both in local
environments and in the cloud. This book will effortlessly equip you with
the compact set of requisite knowledge to utilize PowerShell 7.0 in each of
these environments in no time.
I can promise that with this book, you will soon be automating
repetitive tasks, running custom encryption protocols, and managing
large and complex computing systems with ease. In addition, you will be
able to apply many of the solutions to real-world problems outlined in this
book in your day-to-day tasks (Arntzen, 2019).
It is imperative to remember, however, that any computing system,
even one as simple as your laptop, will continue to grow more
complex and difficult to manage with time. The solutions to most of these
problems are outlined in the pages that follow.
Chapter One:
PowerShell Components
In this chapter we will explore the components of PowerShell 7.0. The
components of PowerShell are the user interface (UI) through which users
interact with PowerShell, as well as related software that enhances
that interface. The related software comprises
Visual Studio Code (VS Code) and the PowerShell Gallery. VS Code is a
script editor that houses PowerShell integrated scripting environment features.
The PowerShell Gallery is a PowerShell content central repository which can
be accessed from PowerShell. In this chapter we will also go through the steps
of downloading and installing the first two components (namely, PowerShell
and VS Code). The installation instructions assume that you have an internet
connection.
PowerShell 7.0 is a GitHub project that was made open source in 2016,
and this chapter will look at how to source material available on GitHub when
starting PowerShell 7.0. The main PowerShell 7.0 installation method
described for the Windows and Linux operating systems uses the .NET Core
SDK (Software Development Kit). In the case of macOS, the main method is
by direct download, but the .NET Core SDK method will also be described. In
the case of ARM (Acorn RISC Machine) devices, a Windows method and a
Linux method will be outlined.
Installing PowerShell 7.0 on Windows
The simplest method to install PowerShell 7.0 on Windows is to use the
.NET Core SDK ("Installing Powershell Core On Windows - Powershell",
2019).
The first step is to have the .NET Core SDK installed. This is done by
downloading an installer for your Central Processing Unit (CPU)
architecture, namely x64 or x86, from https://dotnet.microsoft.com/download/dotnet-core/3.1
("Install .NET Core SDK On Windows, Linux, And macOS - .NET Core",
2019).
The second step is to run the installer and follow the Wizard prompts. The
third step is to check that everything installed properly by opening a command
prompt and running this command (".NET Tutorial | Hello World In 10
Minutes", 2019):
>dotnet
If running “dotnet” prints the dotnet help text, then the installation ran
successfully. If the command results in an error message stating that dotnet is
not recognized as a command, then try re-opening the command prompt.
The fourth step is to run this command to install PowerShell as a .NET
Global tool ("Installing Powershell Core On Windows - Powershell", 2019):
>dotnet tool install --global PowerShell
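Once the tool install finishes, a quick sanity check is to start the new shell and print its version (a sketch; the version output will reflect whichever build was installed):

```powershell
# Start the newly installed PowerShell; the global tool places pwsh on PATH.
pwsh

# Inside the new session, confirm the installed version:
$PSVersionTable.PSVersion
```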
The remainder of the methods can be found at:
https://docs.microsoft.com/en-us/powershell/scripting/install/installing-
powershell-core-on-windows?view=powershell-7
Installing PowerShell 7.0 on Linux
PowerShell 7.0 support is available for Ubuntu (16.04, 18.04, 18.10, and
19.04), Debian (8, 9, and 10), CentOS 7, Fedora (27 and 28), Red Hat Enterprise Linux
(RHEL) 7, openSUSE (42.3 and LEAP 15), and Arch Linux ("Installing
Powershell Core On Linux - Powershell", 2019). The simplest method to
install PowerShell 7.0 on Linux is to use the .NET Core SDK ("Installing
Powershell Core On Linux - Powershell", 2019).
The first step is to install the .NET Core SDK. This is done by
downloading a binary for your Linux distribution from
https://dotnet.microsoft.com/download/dotnet-core/3.1. Then issue the
following commands on the terminal ("Installing Powershell Core On Linux -
Powershell", 2019):
$ mkdir -p $HOME/dotnet && tar zxf dotnet-sdk-3.1.100-linux-x64.tar.gz -C $HOME/dotnet
$ export DOTNET_ROOT=$HOME/dotnet
$ export PATH=$PATH:$HOME/dotnet
The preceding exports in the last two commands are temporary. In other
words, the .NET Core Command Line Interface (CLI) features are available
only for that terminal session. However, the change can be made permanent by
editing the shell profile. In different Linux shells, the following profiles can be
edited ("Installing Powershell Core On Linux - Powershell", 2019):
● Bash Shell: ~/.bash_profile and ~/.bashrc
● Korn Shell: ~/.kshrc or ~/.profile
● Z Shell: ~/.zshrc or ~/.zprofile
The second step is to complete the update by appending $HOME/dotnet to the
end of the PATH statement in the profile file. If there is no PATH
statement in the file, then add the following lines at the end of the file:
export PATH=$PATH:$HOME/dotnet
export DOTNET_ROOT=$HOME/dotnet
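The profile edit described above can be sketched as a small shell snippet. This is a sketch assuming a Bash profile; it writes to a scratch file named profile_scratch so you can inspect the result before pointing PROFILE at your real ~/.bashrc:

```shell
# Sketch: make the exports permanent by appending them to a shell
# profile. We target a scratch file here; set PROFILE to "$HOME/.bashrc"
# to apply the change for real.
PROFILE="${PROFILE:-profile_scratch}"

# Append the two lines only if they are not already present, so the
# snippet is safe to run more than once.
if ! grep -Fq 'DOTNET_ROOT=$HOME/dotnet' "$PROFILE" 2>/dev/null; then
  echo 'export DOTNET_ROOT=$HOME/dotnet' >> "$PROFILE"
  echo 'export PATH=$PATH:$HOME/dotnet' >> "$PROFILE"
fi
```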
The third step is to check that everything installed properly by opening a
command prompt and running this command (".NET Tutorial | Hello World In
10 Minutes", 2019):
$dotnet
If the command prints the dotnet help text, then the installation ran
successfully. If it results in an error message stating that dotnet is
not recognized as a command, then try re-opening the command prompt. The
next step is to run this command to install PowerShell as a .NET Global tool
("Installing Powershell Core On Linux - Powershell", 2019):
$ dotnet tool install --global PowerShell
The preceding steps are used across all Linux distributions. The following
link describes the installation steps for each distribution and version supported
by PowerShell 7.0. The Linux distribution-specific methods can be found at
https://docs.microsoft.com/en-us/powershell/scripting/install/installing-
powershell-core-on-linux?view=powershell-7#ubuntu-1810
Installing PowerShell 7.0 on macOS
On macOS, the simplest method is to install PowerShell 7.0.0 directly
from the release page by downloading the macOS package from ("Installing
Powershell Core On Macos - Powershell", 2019)
https://github.com/PowerShell/PowerShell/releases/latest .
Once PowerShell is installed, issue the following command ("Installing
Powershell Core On Macos - Powershell", 2019):
$pwsh
The next best method is to install PowerShell using the .NET Core SDK
("Installing Powershell Core On macOS - Powershell", 2019). The first step is
to download the installer from ("Install .NET Core SDK On Windows, Linux,
And Macos - .NET Core", 2019)
https://dotnet.microsoft.com/download/dotnet-core/3.1 .
Then use the macOS standalone installer command prompts to install the
.NET Core 3.1 SDK. The third step is to check that everything installed
properly by opening a command prompt and running the following command
(".NET Tutorial | Hello World In 10 Minutes", 2019):
$dotnet
If it prints the dotnet help text, then the installation ran
successfully. If the command results in an error message stating that dotnet is
not recognized as a command, try re-opening the command prompt.
The fourth step is to install PowerShell as a .NET Global tool ("Installing
Powershell Core On Macos - Powershell", 2019) with the following command:
$dotnet tool install --global PowerShell
The remainder of the methods for installing PowerShell on macOS can be
found at https://docs.microsoft.com/en-
us/powershell/scripting/install/installing-powershell-core-on-macos?
view=powershell-7
Installing PowerShell 7.0.0 on ARM
PowerShell 7.0 can be used on a limited set of ARM devices, which
include Windows 10 ARM32/ARM64 as well as Raspbian. PowerShell 7.0 for
ARM is an unsupported experimental release. In this section we will consider
the installation instructions for Raspbian and Windows 10 Internet of Things
(IoT).
Deployment on Windows IoT
The installation instructions for deploying on Windows IoT are as follows
("Installing Powershell Core On Windows - Powershell", 2019):
Windows IoT has Windows PowerShell pre-installed, which can be used
in the installation of PowerShell 7.0.0-rc.1.
PowerShell Gallery
The PowerShell Gallery is a package repository containing modules,
scripts, and Desired State Configuration (DSC) resources ("Get Started With
The Powershell Gallery - Powershell", 2019). These are available for
download and leverage. The PowerShellGet module has cmdlets that can be
used to explore and install these packages. Cmdlets will be covered in Chapter
Two: Commands. Downloading an item from the PowerShell Gallery does not
require signing into the gallery. The PowerShell Gallery will be covered in
Chapter Eight: Remoting and PowerShell Gallery.
The PowerShell Gallery Search control can be used to search for
packages on the homepage (https://www.powershellgallery.com/ ) or to search
through scripts and modules in the packages webpage
(https://www.powershellgallery.com/packages ).
As we will consider in Chapter Eight, PowerShell Gallery packages can
be obtained using the Find-Module, Find-Script and Find-DscResource
cmdlets, depending on the package type, with parameter -Repository
PSGallery. Result filtering can use the following parameter list: AllVersions,
Command, DscResource, Filter, Includes, MinimumVersion, Name, Tag, and
RequiredVersion.
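As a sketch of these cmdlets in use (the module and command names below are illustrative examples, and results depend on the live gallery):

```powershell
# Search the PowerShell Gallery for a module by name; PSScriptAnalyzer
# is only an illustrative choice.
Find-Module -Name PSScriptAnalyzer -Repository PSGallery

# Narrow a search to modules that export a particular command:
Find-Module -Repository PSGallery -Command Invoke-ScriptAnalyzer

# Find a DSC resource and install the module that packages it:
Find-DscResource -Name Archive | Install-Module
```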
The Find-DscResource cmdlet can be used for discovering DSC resources.
The cmdlet returns information on Gallery DSC resources. DSC resources are
packaged in a module, and thus after the resource is found, it can be installed
using the Install-Module cmdlet.
Exercises
1.1. What are the main methods to run the PowerShell Console?
1.2. What are the three VS Code extensions considered in this chapter?
1.3. Is it possible to run the PowerShell Console from within VS Code? If so,
how?
1.4. What PowerShell module can be used to install packages from the
PowerShell Gallery?
1.5. Why is the PowerShell Gallery useful when using PowerShell 7.0?
1.6. How do you install the PowerShell extension from within VS Code using
a .vsix file?
Chapter Summary
The key points of the chapter were:
● PowerShell 7.0 has operating system and environment specific
installation procedures.
● The .NET Core SDK based installation process is the simplest
across all supported platforms.
● The main components of PowerShell 7.0 are the PowerShell
Console, Visual Studio Code, and the PowerShell Gallery.
● The Console and Visual Studio Code components can be linked
using the Visual Studio Code PowerShell extension.
● The main components of PowerShell constitute the user interface
with which users interact with PowerShell.
● The PowerShell Gallery component houses many PowerShell
features that can be used to enhance the basic installation when the
setting requires it.
In the next chapter you will learn about PowerShell commands and how
they are issued in PowerShell when using various components in your
computing environment activities.
Chapter Two:
Commands
A command is a task-specific instruction to a computer program
("Command (Computing)", 2019). A cmdlet is a minimal version of a
command utilized within PowerShell ("Cmdlet Overview - Powershell",
2019). In PowerShell, cmdlets can be invoked at PowerShell runtime as part
of automation scripts issued on the PowerShell console or PowerShell
Application Programming Interface (API) programs.
PowerShell modules, which will be covered in Chapter Six, provide a
way to package cmdlets into groups based on function ("Powershell
Scripting - Powershell", 2019). Table 1 below describes these modules (or
groups) and their functions.
The three modules or cmdlet groups that immediately stand out are the
Microsoft.PowerShell.Core, Microsoft.PowerShell.Utility, and
Microsoft.PowerShell.Host modules. The first two module groups deal with
the core functions and basic features of PowerShell, respectively. The third
module deals with logging records. The next two module groups to consider
are Microsoft.PowerShell.Security and
Microsoft.PowerShell.Management. The activities associated with these two
module groups can be paired with the description of the
Microsoft.PowerShell.Diagnostics module.
The activities associated with module groups
Microsoft.PowerShell.Diagnostics, PSDesiredStateConfiguration, and
PSDiagnostics can be easily grouped together. The activities of
Microsoft.WSMan.Management and CimCmdlets can also be grouped
together.
The activities of PSDesiredStateConfiguration, ThreadJob,
PowerShellGet, PackageManagement, and Microsoft.PowerShell.Archive
could be grouped together in some settings. The PSReadLine module
contains cmdlets that help with command-line editing analogous to script
editor support in scripting.
It is worth highlighting, however, that archiving is also closely related
to the activity associated with Microsoft.PowerShell.Management,
depending on whether the user is creating or opening the archives.
In order to better contextualize the use of cmdlets, it is imperative to
approach cmdlets from the perspective of a computing systems
administrator. A system administrator controls the maintenance, reliable
functioning, and configuration of computing systems within multi-user
computing system environments such as computer clusters ("System
Administrator", 2019). A system administrator safeguards the security,
resources, uptime, and performance of the computing systems he or she
oversees while meeting user needs according to a prescribed budget.
System administrators typically perform the following tasks:
● User account management
● Operating system management
● System log analysis
● Systems analysis and potential issue identification
● Safeguarding network infrastructure uptime
● New software and hardware installation
● Safeguarding security
● File system management
● Computer system physical environment management and planning
● Safeguarding computer environment (production, test, and
development) parity
● Documenting system configuration
● Problem troubleshooting
● User training
● Setting system performance
● Technical support and user assistance
In a PowerShell introduction, it is imperative to keep in mind the
activities of a system administrator and the PowerShell tools available to
perform those tasks, in the form of cmdlets arranged into modules.
The next step is to look at the technical components of cmdlets. Please
note that this part of a PowerShell introduction may seem difficult at first,
so be prepared to work through this section more than once. Once you fully
understand this section, most other aspects of PowerShell will be relatively
easy to understand.
The first step in understanding cmdlets is to understand the cmdlet
structure.
In the following section, please keep in mind the concept of a C# class
object and also refer to the example at the end of the section to identify the
component described in the structure. The purpose of this section is to
outline the structure rather than to emphasize its specifics. It is not
necessary to know all the .NET classes, protocols, and APIs, but it is
important to have an understanding of the objects that the cmdlet interacts
with when performing its function.
Cmdlet Structure
In simple terms, cmdlets are objects that perform an action or method
and yield a .NET Framework object to a following command in a pipeline.
Pipelines will be covered in detail in the next chapter. A cmdlet
class is written as a derived implementation of one of two base classes
(Microsoft Docs, 2019). These two base classes, in turn, are specialized
cmdlet classes.
The derived class is required to:
● Include a declaration of an attribute giving it the identity of a
cmdlet.
● Have public property definitions that can be recognized with
attributes used to identify the defined properties to be cmdlet
parameters.
● Override at least one of the methods for processing input
records.
After implementing the derived class, the assembly containing the
class must be directly loaded into the PowerShell environment either by
using the Import-Module cmdlet or by creating a host application to load
the assembly through the
System.Management.Automation.Runspaces.InitialSessionState API. These
two approaches provide console and program access to cmdlet functionality
in PowerShell.
This functional description of the cmdlet class allows us to characterize
the cmdlet structure through a set of cmdlet terms.
Cmdlet Terms
These terms can be used to characterize the structure of a cmdlet
("Cmdlet Overview - Powershell", 2019):
● Cmdlet Attribute
This is a required .NET attribute used in the declaration of a derived
cmdlet class. A cmdlet must have this feature to be a PowerShell cmdlet. In
addition to the cmdlet attribute, PowerShell also allows the cmdlet to have
other optional attributes.
● Cmdlet Parameter
This is a cmdlet public property defining the parameters available to
users or applications that will run the cmdlet. The parameters may be
required, positional, named, or switch, and will be shown in the core
command syntax section.
● Dynamic Parameter
This is a cmdlet parameter available only at runtime when certain
requirements are met.
● Parameter Set
This is a parameter collection supplied to a single command to
complete an explicit action. As will be seen in the command syntax section
below, a cmdlet may have more than one parameter set, but each parameter
set has a minimum of one unique parameter. A well-designed cmdlet has
unique parameters that are also required parameters.
● Methods for Processing Input
This is a cmdlet input-record processing method. Available methods
include System.Management.Automation.Cmdlet.BeginProcessing,
System.Management.Automation.Cmdlet.EndProcessing,
System.Management.Automation.Cmdlet.ProcessRecord, and
System.Management.Automation.Cmdlet.StopProcessing. The cmdlet is
required to override at least one of the first three during implementation.
Usually it overrides
System.Management.Automation.Cmdlet.ProcessRecord, since this method
is called for every record the cmdlet processes. The first method is
called once to perform record pre-processing. The second, analogously, is
called once for record post-processing.
● Transaction
Transactions are commands that are arranged into a logical unit and
handled as a single task. The command unit has two properties: first, the
whole task is treated as failed if any command in the unit fails; second, it
is at the user’s discretion to accept or reject the commands within
a transaction. A cmdlet is required to declare transaction support as part
of the cmdlet attribute declaration.
● ShouldProcess Feature
PowerShell cmdlets can be designed to request user feedback before
making system changes. The design has two components. The first
component requires that the cmdlet make a ShouldProcess feature
declaration during cmdlet attribute declaration. The second component
requires that the cmdlet call the
System.Management.Automation.Cmdlet.ShouldContinue and
System.Management.Automation.Cmdlet.ShouldProcess methods inside an
input processing method.
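The parameter kinds named above (required, positional, named, and switch) can be sketched in a PowerShell param block. The function and parameter names here are illustrative, not part of any built-in module:

```powershell
# Illustrative advanced function showing the parameter kinds.
function Show-ParameterKinds {
    [CmdletBinding()]
    param(
        # Required and positional: must be supplied, may be given by position.
        [Parameter(Mandatory = $true, Position = 0)]
        [string]$Path,

        # Named (and optional): supplied as -Destination <value>.
        [Parameter()]
        [string]$Destination = ".",

        # Switch: present or absent; no value needed.
        [switch]$Force
    )
    Write-Output "Copying $Path to $Destination (Force: $Force)"
}

Show-ParameterKinds "a.txt" -Destination "backup" -Force
# prints: Copying a.txt to backup (Force: True)
```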
How Cmdlets Vary from Commands
Cmdlets vary from commands in other command-line interfaces in that
cmdlets:
● Are .NET Framework class instances rather than stand-alone
executables.
● Are constructed from code of varying lengths.
● Do not perform their own parsing, output formatting, or error
presentation. In PowerShell, these actions are conducted by the
runtime.
● Process pipeline input objects rather than text streams, and they
usually deliver pipeline output objects.
● Do single object processing in each instance, thus making them
record-oriented.
Cmdlet Base Classes
PowerShell allows cmdlets to derive from these base classes:
● The System.Management.Automation.Cmdlet base class:
Cmdlets deriving from this base class use the minimal collection
of PowerShell runtime dependencies. The main benefits of this
are twofold. First, it makes for smaller cmdlet objects impacted
less by PowerShell runtime changes. The second is that cmdlet
object instances can be created and invoked directly without
requiring the object’s PowerShell runtime invocation.
● The System.Management.Automation.PSCmdlet base class: This is used
for more complex cmdlets with greater access to the PowerShell
runtime. The greater access in turn allows for the calling of
scripts, accessing of providers, and accessing of the present
session state. The state of the current session is accessed by
obtaining and setting session preferences and variables. In this
case, the cmdlet object is larger and more impacted by changes to
the PowerShell runtime.
When extended PowerShell runtime access is not required, the
cmdlet should derive from the System.Management.Automation.Cmdlet
class. Deriving from System.Management.Automation.PSCmdlet, on the
other hand, gives access to the PowerShell runtime's increased cmdlet
execution logging facilities, allowing you to run logging-dependent
auditing models. In addition, deriving from that base class can be used to
stop a cmdlet from executing inside another cmdlet.
Methods For Processing Input
The virtual record processing methods offered by the
System.Management.Automation.Cmdlet class are:
● System.Management.Automation.Cmdlet.BeginProcessing:
This method provides a one-time pre-processing function. It is
optional.
● System.Management.Automation.Cmdlet.EndProcessing: This
method provides a one-time post-processing function. It is
optional.
● System.Management.Automation.Cmdlet.ProcessRecord: This
method provides a record-by-record processing function. Its calls
are mainly input-dependent. The frequency of its calls may be
zero or arbitrary.
● System.Management.Automation.Cmdlet.StopProcessing: This
method provides a stop-processing function. For example, this
method would be called when the user presses CTRL+C to stop
cmdlet processing.
Derived cmdlets have to override at least one of the first three methods
during implementation.
Cmdlet Attributes
These are .NET attributes for managing cmdlets. They are used by
cmdlets to specify and access required common functionalities offered by
PowerShell.
Cmdlet Names
PowerShell cmdlets are named using verb-noun pairs. The noun
specifies the computer resource the cmdlet will act on and the verb specifies
the action the cmdlet will take. Cmdlet names are defined as part of the
.NET class cmdlet declaration.
Cmdlet Structure Example
It is imperative at this stage to give specific form to this information
with an example. Our example will explore the structure of a cmdlet from a
programming perspective. The cmdlet is the Send-Greeting cmdlet,
obtained from "How To Write A Simple Cmdlet - Powershell" (2019). The
Send-Greeting cmdlet has a –Name input parameter that takes the user’s
name and outputs the greeting on the console. The cmdlet performs a very
simple task but satisfies all the requirements of a cmdlet that we have
previously explored.
In line with the structural requirements outlined above, let us consider
the steps involved in writing a cmdlet, which are as follows:
Step One: Use the Cmdlet attribute to declare the derived class as a cmdlet.
The Cmdlet attribute gives the cmdlet name verb-noun combination.
Step Two: Provide the name of the class.
Step Three: Specify that the cmdlet derives from one of the following
cmdlet base classes: System.Management.Automation.PSCmdlet or
System.Management.Automation.Cmdlet
Step Four: Use the Parameter attribute to specify the cmdlet parameter (-
Name)
Step Five: Override one of the required methods for input processing. In
this example, we choose to override
System.Management.Automation.Cmdlet.ProcessRecord.
Step Six: Format the output object. Use the
System.Management.Automation.Cmdlet.WriteObject to specify the
greeting display format:
Hello <UserName>!
These steps are performed by using the C# code shown in Figure 5
below ("How To Write A Simple Cmdlet - Powershell", 2019). The
structural description coupled with the cmdlet creation method provides the
basis for understanding PowerShell cmdlets and how they behave in the
PowerShell components.
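As a rough sketch of the same structure in PowerShell itself, the Send-Greeting example can be mirrored by an advanced function, where [CmdletBinding()] plays the role of the Cmdlet attribute and the process block corresponds to overriding ProcessRecord. The function body here is an illustrative analogue, not the book's C# listing:

```powershell
# Sketch: a PowerShell advanced function mirroring the Send-Greeting
# cmdlet's structure.
function Send-Greeting {
    [CmdletBinding()]
    param(
        # Step Four: the -Name cmdlet parameter.
        [Parameter(Mandatory = $true)]
        [string]$Name
    )
    process {
        # Steps Five and Six: process each record and emit the output.
        Write-Output "Hello $Name!"
    }
}

Send-Greeting -Name "World"   # prints: Hello World!
```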
Variable Scope
When a variable is created in a particular scope, then it is by default
only available in that scope. To illustrate, a variable created within a
function only becomes available within that function. A variable created
within a script only becomes available within that script. If, however, a
script is dot-sourced, then that variable is available in the local or current
scope. The default variable scope can be changed using an optional scope
modifier. The following variable assignment creates a $Servers variable:
>$Global:Servers= "Server01","Server02","Server03"
>$Global:Servers
Server01
Server02
Server03
This assignment statement is such that the scope of the variable will be
global even when created within a function or script.
Access to PowerShell aliases, functions, variables and drives is
protected. PowerShell achieves this by limiting the circumstances under
which a user can read and modify them. This is facilitated by PowerShell
scoping rules. Scoping rules ensure that items are not inadvertently
changed.
The basic scope rules are:
● Scopes are able to nest. Any outer scope is considered the
parent scope. Any scopes nested in that parent scope are its child
scopes.
● An item created in a scope becomes visible in that scope and
any of its child scopes, unless it is explicitly made private.
Aliases, variables, functions, or PowerShell drives can be placed
in one or multiple scopes.
● An item created within a specific scope can only be modified in
that scope, unless a different scope is specified for that purpose.
If an item created within a scope has the same name as an item
from another scope, then the original item may be hidden by the newer
item, but it is not overridden or modified.
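A minimal sketch of this hiding behavior (the variable and function names are illustrative):

```powershell
# A variable in the parent scope.
$Name = "global"

function Show-Name {
    $Name = "local"       # creates a new $Name in the child scope
    Write-Output $Name    # the outer $Name is hidden, not modified
}

Show-Name                 # prints: local
Write-Output $Name        # prints: global (the parent's value survives)
```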
Types of Scopes
PowerShell has the following scopes:
● Global: This is the scope during PowerShell start-up. Variables
that are in PowerShell at start-up were created within the global
scope. These include preference and automatic variables. This
applies analogously to functions. PowerShell user profile aliases,
variables, and functions at PowerShell start-up were also created
in this scope.
● Local: The current scope. The current scope can be the global
scope or any other scope.
● Script: This is the scope which is generated when a script runs.
This is the scope where only the script commands run. This scope
is also the current or local scope for the script commands.
It is important to highlight that private is a visibility option for an item
rather than a scope. The private option of an item determines whether items
outside of its scope can view it.
Parent and Child Scopes
Another scope can be created by executing a function or script,
starting a new session, or starting a new PowerShell instance. The newly created
scope is a nested child scope of the original parent scope. Thus, in
PowerShell, scopes are nested child scopes of the parent global scope. It is
possible to create multiple scopes and multiple recursive scopes. Items
within a parent scope can be used by its child scope unless explicitly made
private. The items created and modified within the nested child scope, by
default, will not impact the original parent scope. Items in the child scope
can affect objects in the parent scope if their scope is specifically specified
to be the parent scope when they are created.
Inheritance
A nested child scope does not inherit items such as aliases, variables,
and functions from its parent scope. An item in the parent scope is
visible to the child scope, unless it is private, and the child scope can
modify it by specifically referencing the parent scope, but the item itself
does not belong to the child scope. A child scope is, however, created with
one collection of items: all aliases and all variables whose Option
property is set to AllScope. The AllScope option will be discussed later in
this chapter. Items in a specific scope can be found using the Get-Alias or
Get-Variable cmdlets invoked with the target scope as the value for the
Scope parameter. To
illustrate, variables in the local scope can all be displayed with the
command:
>Get-Variable -Scope local
And those in the global scope with the following command:
>Get-Variable -Scope global
Scope Modifiers
A function name or alias may include an optional scope modifier. The
modifier can be one of:
● global: Specifies that the name exists in the Global scope.
● local: Specifies that the name exists in the Local scope. The
current and local scopes are the same.
● private: Makes the name Private, so that it is visible only in the
local scope.
● script: Specifies that the name exists in the Script scope, which is
the scope of the nearest ancestor script file, or Global if there is
no such script file.
● using: Allows variables of another scope to be accessed when
running scripts using cmdlets like Invoke-Command and
Start-Job.
● <variable-namespace>: A scope modifier created by a PowerShell
PSDrive provider, such as Alias:, Env:, Function:, or Variable:.
The script scope is the default scope for a script file. The local scope
is the default scope for aliases and functions, even when they are defined
in a script.
Using Scope Modifiers
A way to designate the scope when creating a new alias, variable, or
function is by using an optional scope modifier. In the case of variables, the
syntax is as follows:
>$[<scope-modifier>:]<name> = <value>
In the case of functions, the syntax is as follows:
>function [<scope-modifier>:]<name> {<function-body>}
This command is an assignment expression to create or modify a
variable $a in the local or current scope:
>$a = "a"
To create the variable $a such that it exists in a global scope, use a
scope modifier set to global, as follows:
>$global:a = "a"
Similarly, to create the variable $a such that it exists in the script scope,
use a scope modifier set to script, as follows:
>$script:a = "a"
The optional scope modifier can be used with functions. This function
definition will create a function that exists in the global scope:
>function global:Hello {
Write-Host "Hello, World"
}
Optional scope modifiers can also be used when denoting a variable
within another scope. The following denotes the $variable variable, initially
in the current scope and subsequently within the parent global scope:
>$variable
>$global:variable
The Using: Scope Modifier
The Using: scope modifier is a special modifier used to identify a local
variable in a remote command. When a modifier is not used, PowerShell
requires that variables that are part of remote commands be part of the
remote session variable definition. This scope modifier was introduced in
Windows PowerShell 3.0.
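As a brief sketch of the difference, assuming a reachable remote computer named Server01 and a hypothetical local variable $logPath (both names are illustrative, not from the text):

```powershell
# Hypothetical local variable and remote computer name (Server01).
$logPath = "C:\Logs"

# Without Using:, $logPath is undefined in the remote session, so this fails:
Invoke-Command -ComputerName Server01 -ScriptBlock { Get-ChildItem $logPath }

# With Using:, the local value is passed into the remote command:
Invoke-Command -ComputerName Server01 -ScriptBlock { Get-ChildItem $using:logPath }
```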
AllScope Option
An item with an Option property that is AllScope becomes visible to
and part of the nested child scope. Modifications to that item in any of the
scopes immediately affects all scopes where it is defined.
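A minimal sketch of this behavior, using a hypothetical variable named $theme:

```powershell
# Create a variable with the AllScope option (the name $theme is hypothetical).
New-Variable -Name theme -Value "dark" -Option AllScope

function Set-Theme {
    # Because $theme has AllScope, this assignment changes the variable
    # in every scope in which it is defined, not just the function's scope.
    $theme = "light"
}

Set-Theme
$theme   # displays "light" in the calling scope as well
```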
Managing Scope
A variety of cmdlets also have a parameter that can be used for
managing scope. This parameter is called the Scope parameter. This
parameter used in conjunction with the cmdlets provides a means to either
get, change, or create items within a specified scope. This command lists
all cmdlets in the session that have a Scope parameter:
>Get-Help * -Parameter scope
Variables which are visible within a specific scope can be found using
the Get-Variable cmdlet and its Scope parameter. The variables that will be
visible include parent scope variables, current scope variables, and global
variables. This command displays variables which are visible to the current
scope:
>Get-Variable -Scope local
Variables can be created in a specific scope using the optional
scope modifier or the Scope parameter of the New-Variable or Set-Variable
cmdlets. This command creates a variable in the global scope:
>New-Variable -Scope global -Name numeric -Value 1
The New-Alias, Get-Alias or Set-Alias cmdlets have a Scope
parameter that can be used for specifying the scope. This command will
create an alias whose scope is global:
>New-Alias -Scope global -Name command -Value cmd.exe
Functions within a scope can also be displayed without using a cmdlet's
Scope parameter: first change to the target scope and then invoke the
Get-Item cmdlet within that scope.
It is important to note that cmdlets whose parameters include a Scope
parameter allow for scopes to be referred to by number. The number labels
the relative location of a scope to a different one. Scope 0 is the local or
current scope. Scope 1 is the direct parent scope. Scope 2 is the direct
parent of Scope 1, and so on. Numbered scopes are especially useful when
multiple recursive scopes have been created.
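For instance, from within a script or function, numbered scopes can be used as follows (the variable name counter is hypothetical):

```powershell
# Inside a function or script, numbered scopes refer to ancestors:
Get-Variable -Scope 0                          # items in the local (current) scope
Get-Variable -Scope 1                          # items in the immediate parent scope
Set-Variable -Name counter -Value 5 -Scope 1   # modify a hypothetical parent-scope variable
```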
Using Dot Source Notation with Scope
Functions and scripts stick to scope rules. They are created in a specific
scope and have an impact only on that scope with the exception being if a
parameter of a cmdlet or an optional scope modifier is used to alter the
scope. Dot sourcing can also add a function or script to the local scope.
When the script executes in the local scope, any aliases, variables and
functions created by the script become available to the local scope. A
function can be added to the local scope by typing a dot symbol (.) followed
by a space preceding the function path location and name within the call to
the function. To illustrate, a Sample.ps1 script located in the path C:\Scripts
can be executed within its script scope (which is the default scope for a
script) with this command:
>C:\Scripts\Sample.ps1
If your current location is the C:\Scripts folder, the Sample.ps1 script
can also be run with either of these two commands:
>./Sample.ps1
>.\Sample.ps1
This command runs a Sample.ps1 script located in the path C:\Scripts
within the current scope (using dot sourcing):
>. C:\Scripts\Sample.ps1
When the call operator, designated with the symbol &, is used to
execute a script or function, the items it creates are not added to the
local scope. This command executes the Sample.ps1 script located in the path
C:\Scripts by calling it via &:
>& C:\Scripts\Sample.ps1
Any variables, functions, or aliases created while executing the
Sample.ps1 file will not be available within the local scope.
Restricting Without Scope
Some PowerShell concepts interact with or are comparable to scope and
can easily be mistaken for it. Sessions, modules, and nested prompts are
not nested child scopes of the parent global scope; they are autonomous
environments.
● Sessions
A session is a self-contained environment in which PowerShell
operates. When a session is created on a remote computer, PowerShell
establishes a persistent connection with that computer. The
resulting session is based on this connection, which facilitates the running
of several associated commands. The session, being a self-contained
environment, has a separate individual scope, but the session is not itself a
nested child scope within the session that created it. The session begins with
an individual global scope. The original global scope is not dependent on
the new global scope of the remote session. Child scopes can be created in
the remote session. To illustrate, a script can be executed in the remote
session.
● Modules
Modules are used to deliver and share PowerShell tools. A module can
be characterized as a component within which aliases, variables, functions,
scripts, cmdlets, and other useful items can be contained. Unless
specifically exported, module items cannot be accessed from outside the
module. A module can, therefore, be added to a session and its public items
used without concerns that they will override the functions, scripts, cmdlets,
and other items within the session.
Modules are, by default, not loaded into the local scope but rather the
highest level of the current PowerShell session’s state. This state is either
the module session or global session state. When a session is supplemented
with a module, this action does not modify the scope of the session. When
your scope is global, then module loading will be at the global state. The
placement of any exports will be into the tables at the global level. If a
module, say module_2, is loaded from within another module, say
module_1, then module_2 will be loaded under the state of the session of
module_1, rather than the state of the global session. The placement of any
exports from module_2 will be into the state of the session of module_1.
In the case where the Import-Module cmdlet has a local Scope
parameter, exports will be placed into the object of the local scope instead
of at the highest level. If the Import-Module cmdlet with a global value is
used within a module to load a different module, then the imported module
is loaded with a global state rather than that of the module from which it
was imported. The exports of the imported module will analogously load
into the global session state as well. This feature was designed for
modules that manage other modules. The PowerShell WindowsCompatibility
module uses it to load proxy modules into the global session state.
● Nested Prompts
When the user enters a prompt that is nested, this prompt is part of the
PowerShell environment in the current scope. Similarly, during script
debugging, scripts have a default scope. When a script breakpoint is
reached, then the user will go into the script scope. In both cases, the nested
prompt is a self-contained environment with no individual scope.
● Private Option
The Option property of a variable or alias can be set to private. An
item with the private option can be viewed and changed only in the scope
in which it was created. To illustrate, suppose a user creates a variable
in the global scope with an Option property value of private. If the user
then runs a Get-Variable command in a script, the command will not display
the variable, and prefixing it with the global scope modifier will not
display it either. The private option of a variable can be set by invoking
either the Set-Variable or New-Variable cmdlets with the Option parameter.
Similarly, the private option of an alias can be set by invoking either
the Set-Alias or New-Alias cmdlets with the Option parameter.
● Visibility
The Visibility property operates on containers in a similar manner to
how the Option property operates on scopes. The Visibility property
assumes a value of either Private or Public. Items with a private value can
only be changed and viewed by items inside the container that created them.
They cannot be changed or viewed when their container is imported or
added.
Visibility operates in a different manner within a scope because it is
designed for containers. If an item has a private Visibility value within a
global scope, then it cannot be changed or viewed in all scopes. If the user
tries to change or view the value stored in a variable that has a private
Visibility value, this will result in a PowerShell error message.
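As an illustration, the New-Variable cmdlet's Visibility parameter can set this property (the variable name $secret is hypothetical):

```powershell
# Create a variable whose Visibility is Private (the name $secret is hypothetical).
New-Variable -Name secret -Value 42 -Visibility Private

# Reading or assigning it afterwards produces a PowerShell error rather
# than the value:
$secret
```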
Let us illustrate these concepts on scope with some examples.
Example: Change a Variable Value Only in a Script
In this illustration, we run a script command to change the value
stored in the variable $ConfirmPreference. Initially, display the value
of $ConfirmPreference in the current scope using this
command:
> $ConfirmPreference
This will yield:
High
Compile the following commands into a script file called Scope.ps1:
$ConfirmPreference = "Low"
"The value of `$ConfirmPreference is $ConfirmPreference."
Execute the script. The script execution alters the value of the
variable $ConfirmPreference and displays it within the script scope. The resulting
output should be similar to the following:
The value of $ConfirmPreference is Low.
The last step is to compare this with the current stored value of the
variable $ConfirmPreference within the local scope:
>$ConfirmPreference
This should generate the following output:
High
This illustration demonstrates that modifying the value of a variable
within the script scope has no impact on the value of that variable in the
global parent scope.
Example: View a Variable Value in Different Scopes
The optional scope modifiers can be used to display the stored variable
value in the current scope as well as the parent scope.
Initially, we store a value inside a variable called $test within the scope
that is global:
>$test = "Global"
Then, create a script called Sample.ps1. Inside the script place an
assignment statement that stores a value in a variable called $test and use an
optional scope modifier to state both $test variable versions.
$test = "Local"
"The local value of `$test is $test."
"The global value of `$test is $global:test."
Executing the Sample.ps1 script will generate output that is similar to
the following:
The local value of $test is Local.
The global value of $test is Global.
After the script executes, the $test variable value definition in the
global scope applies in the local session:
> $test
It will generate the following output:
Global
Example: Change the Value of a Variable in a Parent Scope
A stored variable value can be modified and displayed, unless it is
protected by either using a private value for the Option property or a
different method, in its parent scope. Initially, assign a stored value to a
variable called $test within the scope that is global:
>$test="Global"
Then, create a script file called Sample.ps1. Inside the script, place
an assignment statement that stores a value in the $test variable using
the global scope modifier, and then display it:
$global:test = "Local"
"The global value of `$test is $global:test."
After the script executes, the $test variable value in the global scope is
modified:
>$test
This should generate the following output:
Local
Example: Creating a Private Variable
A variable is said to be private if it has a private value for the Option
property. Private variables can only be modified or viewed within the scope
where they are created; they are not visible in nested child scopes. This
command assigns a value to a private variable called $ptest within the
current scope:
>New-Variable -Name ptest -Value 1 -Option private
The $ptest value can be modified and viewed in the current scope:
>$ptest
>$ptest=2
>$ptest
This will yield the following output:
>$ptest
1
>$ptest = 2
>$ptest
2
Then, create a script called Sample.ps1. Place these commands in the
script to view the $ptest value:
"The value of `$Ptest is $Ptest."
"The value of `$Ptest is $global:Ptest."
The resulting output has no value for $ptest because the variable
cannot be viewed from the scope of the script:
"The value of $Ptest is ."
"The value of $Ptest is ."
Saving Variables
Variables that are created in a session can only be accessed in that
session. They are only part of that session and are removed once the session
is closed. Variables can be made available to other PowerShell sessions by
adding them to your profile. To illustrate, add this command into your
profile to modify the $VerbosePreference variable value in other sessions:
$VerbosePreference = "Continue"
To add a command to your profile, modify the file stored in the path
$PROFILE using a text editor.
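For example, the profile can be created if it does not exist yet and then edited or appended to. This is a minimal sketch; the exact profile path stored in $PROFILE varies by system:

```powershell
# Create the profile file if it does not exist yet.
if (-not (Test-Path $PROFILE)) {
    New-Item -Path $PROFILE -ItemType File -Force
}

# Append the setting directly from the command line.
Add-Content -Path $PROFILE -Value '$VerbosePreference = "Continue"'
```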
Variable: drive
The Variable provider exposes the drive designated Variable:. This
drive behaves and appears similar to the drive of a file system, but instead
contains information about your session variables, such as their values. To
change your location to be drive designated Variable:, use this command:
>Set-Location Variable:
The variables and items in the drive designated Variable: can be
displayed using the Get-ChildItem or Get-Item cmdlets.
>Get-ChildItem Variable:
A value for a specific variable can be obtained by specifying the drive
name and variable name using the notation for a file system. To illustrate,
this command displays the value of the $PSCulture variable, which is a
PowerShell automatic variable:
>Get-Item Variable:\PSCulture
This will yield the following according to your PowerShell culture
setting:
Name Value
---- -----
PSCulture en-US
To display the help file for the drive designated Variable: and its
associated provider, use this command:
>Get-Help Variable
Variable Syntax with Provider Paths
The PowerShell variable syntax dollar symbol ($) can also be used
with provider paths. The provider path should be prefixed with the dollar
symbol ($) to access the provider contents. The provider should, however,
implement the interface designated IContentCmdletProvider. The built-in
providers supporting this notation are shown in Table 3.
Variable Cmdlets
PowerShell has a set of cmdlets that are designed to manage variables.
In order to list these cmdlets, type:
>Get-Command -Noun Variable
It will generate a list of the variable cmdlets, showing the
CommandType, Name, and ModuleName of each.
Chapter Summary
The key points of the chapter were:
● A PowerShell variable is a unit of memory in which values are
stored.
● The three general categories of PowerShell variables are user-
created variables, automatic variables, and preference variables.
● Cmdlet functioning can be greatly enhanced through the use of
variables.
● Variables can be used in commands and expressions.
● Variables are only available in the scope in which they are
created.
● Variables constitute an important component of scripting.
In the next chapter you will learn about PowerShell pipelines.
Chapter Four:
Pipelines
In PowerShell, a sequence of commands joined together with pipeline
operators is called a pipeline. A pipeline operator is represented by the
vertical-bar symbol (|). In the pipeline, each operator transmits the output of
the previous command to the following command.
In this arrangement, the results emanating from the first command in the
pipeline can be relayed as the input for the following command. This process
can be repeated by transmitting the resulting output of this command to
another command. The net result is an integrated complex command that can
be decomposed into simpler ones. The commands can additionally be arranged
into a logical sequence.
In the PowerShell console, the pipeline command is as follows:
> Command-One | Command-Two | Command-Three
In the above illustration, the output objects emitted by Command-One are
transmitted to Command-Two. Then, Command-Two performs operations on
the input objects and transmits them as inputs to Command-Three. Command-
Three similarly performs operations on the input objects and transmits them
further into the pipeline. Since there are no further commands in this particular
pipeline, the output of Command-Three is displayed on the command-line. In
a PowerShell pipeline, commands are executed successively from the left to
the right. The overall process is treated as one operation and the results are
shown in real-time as they are generated.
Let us consider some simple examples to illustrate the process.
Example: Pipeline to start a Notepad process and then stop it
This command invokes the Get-Process and Stop-Process cmdlets in a
pipeline on a Notepad process object:
>Get-Process notepad | Stop-Process
This pipeline command combines two commands. The first invokes the
Get-Process cmdlet with the Name parameter value of notepad to acquire an
object that represents a running Notepad process. It then uses the pipeline
operator to transmit the object down the pipeline to the cmdlet Stop-Process
that stops the process. In the second command, the Stop-Process cmdlet has no
parameters such as Id or Name to specify the input process, because the
process object is submitted as pipeline input.
Example: Filtering, sorting, and displaying text files in a folder
This example uses the Get-ChildItem cmdlet with the -Path parameter to
identify text files in the present location. It then selects the files greater than
100 bytes in length, and uses the Sort-Object cmdlet with the Property
parameter to order them based on their lengths. The output shows the sorted
files in a formatted table displaying the file lengths and names.
This is performed by the following command:
>Get-ChildItem -Path *.txt |
Where-Object {$_.length -gt 100} |
Sort-Object -Property length |
Format-Table -Property name, length
The pipeline above combines four commands in a stipulated order. This
diagram illustrates the pipeline process as well as how the results of a
preceding command are transmitted down the pipeline.
Get-ChildItem -Path *.txt
| (FileInfo objects for *.txt)
V
Where-Object {$_.length -gt 100}
| (FileInfo objects for *.txt)
|( Length > 100 )
V
Sort-Object -Property Length
| (FileInfo objects for *.txt)
|( Length > 100 )
| ( Sorted by length )
V
Format-Table -Property name, length
| (FileInfo objects for *.txt)
|( Length > 100 )
| ( Sorted by length )
| ( Formatted in a table )
V
Name Length
---- ------
tmp1.txt 82920
tmp2.txt 114000
tmp3.txt 114000
Using Pipelines
A majority of cmdlets in PowerShell incorporate pipeline support as part
of their design. In these cases, it is possible to pipe the output of a cmdlet with
a Get noun to the cmdlet of an identical noun. To illustrate, the output of the
cmdlet Get-Service can be piped either to the Stop-Service or Start-Service
cmdlets.
The following command uses the Get-Service and Start-Service cmdlets
in a pipeline to start the service WMI on the local machine:
> Get-Service wmi | Start-Service
Another PowerShell pipeline support feature is the large number of
utility cmdlets, such as Sort-Object, Where-Object, Group-Object, and
Get-Member, that are widely used in pipelines. These cmdlets can accept
virtually any object type as input. Let us look at another example.
Example: Sort Processes on the Computer by Open Handles
This illustration sorts the processes on the local computer by their
number of open handles:
>Get-Process | Sort-Object -Property handles
This will generate the following output (where only the first few lines are
shown):
PS C:\> Get-Process | Sort-Object -Property handles
Handles  NPM(K)  PM(K)  WS(K)    CPU(s)    Id  SI  ProcessName
-------  ------  -----  -----    ------    --  --  -----------
      0      16    908   8140     91.55    96   0  Registry
      0       0     52      8               0   0  Idle
      0       0   1576  42573  6,118.28  2376   0  Memory Compression
Another PowerShell pipeline support feature is that piped objects can
be sent to the export, output, and formatting cmdlets, including
Export-Csv, Export-Clixml, Out-File, Format-Table, and Format-List.
Example: List properties for a Process Object
The following illustration uses the cmdlet Format-List to show a property
list of an object representing a process.
>Get-Process winlogon | Format-List -Property *
This will generate the following output (where only the first few lines are
shown):
PS C:\> Get-Process winlogon | Format-List -Property *
Name : winlogon
Id : 46476
PriorityClass : High
FileVersion : 10.0.17134.1 (WinBuild.160101.0800)
HandleCount : 258
WorkingSet : 8130560
PagedMemorySize : 2375680
PrivateMemorySize : 2375680
With practice, you will notice that coalescing simple commands to
construct pipelines conserves typing and time while increasing the efficiency
of scripting.
Pipeline Processing
In this section, we will look at the manner in which input objects are
processed in pipeline execution and bound to the parameters of cmdlets.
One-at-a-time processing
When objects are piped to a receiving command, the process is similar to
using a command parameter for submitting the objects.
Let us illustrate these concepts with a simple example.
Example: Display a table of service objects
In this illustration, the pipeline is used to present a service object table.
The pipeline is as follows:
>Get-Service | Format-Table -Property Name, DependentServices
The first few lines of the output generated are as follows:
PS C:\> Get-Service | Format-Table -Property Name, DependentServices
Name DependentServices
---- -----------------
AbtSvcHost {}
AdobeARMservice {}
AdobeFlashPlayerUpdateSvc {}
AJRouter {}
ALG {}
AppIDSvc {applockerfltr}
Appinfo {}
AppMgmt {}
AppReadiness {}
AppVClient {}
In a practical sense, the output is similar to one produced with the
Format-Table cmdlet's InputObject parameter, which formats a set of
objects. To illustrate, the set of services can be saved in a variable and
then submitted using the InputObject parameter. Consider the following
commands:
>$services = Get-Service
>Format-Table -InputObject $services -Property Name,
DependentServices
This sequence will generate the following output (where only the first few
lines are shown):
PS C:\> $services = Get-Service
PS C:\> Format-Table -InputObject $services -Property Name,
DependentServices
Name DependentServices
---- -----------------
AbtSvcHost {}
AdobeARMservice {}
AdobeFlashPlayerUpdateSvc {}
AJRouter {}
ALG {}
AppIDSvc {applockerfltr}
Appinfo {}
AppMgmt {}
AppReadiness {}
AppVClient {}
AppXSvc {}
AssignedAccessManagerSvc {}
Notice how the outputs are essentially the same. You can think of the
pipeline as a “shorthand” method for the longer, more explicit method called
above.
Alternatively, the command may be embedded in the parameter
InputObject:
>Format-Table -InputObject (Get-Service) -Property Name,
DependentServices
This will generate the following output (where only the first few lines are
shown):
PS C:\> Format-Table -InputObject (Get-Service) -Property Name,
DependentServices
Name DependentServices
---- -----------------
AbtSvcHost {}
AdobeARMservice {}
AdobeFlashPlayerUpdateSvc {}
AJRouter {}
ALG {}
AppIDSvc {applockerfltr}
In this arrangement there is a critical difference. In a pipeline, when
multiple objects are piped to a receiving command, PowerShell sends the
objects one at a time. But when the command parameter is used, the objects
are transmitted as a single array object. The difference may seem minor,
but its effects are substantial. When a pipeline is executed, PowerShell
enumerates any type implementing the IEnumerable interface and sends its
members one at a time down the pipeline. The exception to this approach is
[hashtable], which requires a GetEnumerator() method call.
In these examples, a hashtable and an array are pipe inputs of the cmdlet
Measure-Object. The purpose of the setup is to count, in each case, the number
of input objects the cmdlet receives in the pipeline. Both the hashtable and the
array have multiple members. Only the pipeline involving the array results in
a one at a time enumeration.
In PowerShell, the following command:
>@(1,2,3) | Measure-Object
will generate:
Count    : 3
Average  :
Sum      :
Maximum  :
Minimum  :
Property :
and the following command:
>@{"One"=1;"Two"=2} | Measure-Object
will generate:
Count    : 1
Average  :
Sum      :
Maximum  :
Minimum  :
Property :
In a similar manner, if multiple process objects are piped from the cmdlet
Get-Process to the cmdlet Get-Member, PowerShell will send each of the
process objects, one at a time, to the Get-Member cmdlet. Get-Member then
displays the .NET type of the input process objects along with their
properties and methods. The command is as follows:
>Get-Process | Get-Member
This call will generate the following output (where only the first few lines
are shown):
TypeName: System.Diagnostics.Process
Name MemberType Definition
---- ---------- ----------
Handles AliasProperty Handles = Handlecount
Name AliasProperty Name = ProcessName
NPM AliasProperty NPM = NonpagedSystemMemorySize64
PM AliasProperty PM = PagedMemorySize64
It is important to note that, in the above output, the Get-Member cmdlet
eliminates any duplicates. Hence, if all the objects are of one type, then only
this object type is displayed. If we use the parameter embedding method,
which embeds the process in the InputObject parameter, with the following
command:
>Get-Member -InputObject (Get-Process)
it will generate the following output (where only the first few lines are
shown):
PS C:\> Get-Member -InputObject (Get-Process)
TypeName: System.Object[]
Name MemberType Definition
---- ---------- ----------
Count AliasProperty Count = Length
Add Method int IList.Add(System.Object value)
Address Method System.Object&, mscorlib,
Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
Address(int )
Clear Method void IList.Clear()
If, on the other hand, the Get-Member InputObject parameter is used,
Get-Member receives the System.Diagnostics.Process objects as a single
array object and shows the properties of that array. This is indicated by
the array symbol [] following the System.Object type name.
This output result may not match what was intended. If, however, the
mismatch is understood or expected, then it may still be useful for the purpose
at hand. To illustrate, there is a Count property that can be used by array
objects. This can be used to obtain a number count of the running computer
processes. This can be done with the following command:
>(Get-Process).count
The important takeaway from these examples is that objects transmitted
down a pipeline are supplied one at a time.
Line Continuation
PowerShell has a pipeline support feature that enhances the
readability of pipelines by allowing a pipeline to be split across multiple
lines. If the last token on a line is the pipe operator symbol (|), the
PowerShell parser joins it with the next line so the current command may
continue constructing the pipeline.
As an example, consider the following pipeline command:
>Command-One | Command-Two | Command-Three
One can write this line as:
>Command-One |
Command-Two |
Command-Three
The resulting leading spaces on subsequent lines have no impact. This
indentation format is desirable because it enhances the readability. PowerShell
7.0 has extra support for pipeline continuation by allowing the pipeline
operator symbol (|) to be placed when the line starts.
The following two illustrative templates show how to use this new
functionality when constructing a PowerShell pipeline command:
# Wrapping with a pipe at the beginning of a line (no backtick required)
Get-Process | Where-Object CPU | Where-Object Path
| Get-Item | Where-Object FullName -match "AppData"
| Sort-Object FullName -Unique
# Wrapping with a pipe on a line by itself
Get-Process | Where-Object CPU | Where-Object Path
|
Get-Item | Where-Object FullName -match "AppData"
|
Sort-Object FullName -Unique
It is important to note that, when working with these illustrative
templates in the console, code containing such pipelines should be pasted
using CTRL+V rather than by right-clicking the mouse and pasting from the
drop-down menu. CTRL+V submits the entire block of code at once, whereas
right-clicking submits the input line by line. Because the new syntax
allows lines to end without a vertical bar symbol (|), PowerShell will
interpret a line that ends without one as a complete command and execute
it when input is submitted line by line.
Exercises
4.1. How are PowerShell commands connected to form a pipeline?
4.2. In what order are two connected commands in a pipeline processed?
4.3. How is object processing handled in a pipeline operation?
4.4. What are the two ways in which cmdlet parameters can accept pipeline
input in a pipeline?
4.5. What are PowerShell's parameter binding component's criteria for
associating pipeline input objects with cmdlet parameters?
4.6. How does PowerShell pipe multiple objects to a command in a pipeline?
Chapter Summary
The key points of the chapter were:
● A PowerShell pipeline is a series of commands connected by a
pipeline operator.
● Pipelines allow commands to be combined in order to create more
powerful commands.
● PowerShell has features that make designing and using pipelines a
useful scripting function.
● Pipelines make scripting more efficient.
In the next chapter you will learn about functions.
Chapter Five:
Functions
Functions
A function constitutes a list of statements with a user-assigned name.
The function can be executed by typing its name. The statements included
within the list will execute as if they had been typed on the console.
A function may be simple, such as:
>function Get-PowerShellProcess { Get-Process PowerShell }
A function may also be more complex. It may have the complexity of
an application or a cmdlet. Functions, similar to cmdlets, may contain
parameters as well. These parameters may be positional, named, dynamic,
or switch.
A function can be run from the console or used within a pipeline.
Functions can be constructed to return values, which can be displayed,
assigned to variables, or passed to cmdlets or other functions. A
particular return value can be specified through the keyword
return. The return keyword does not suppress or affect other output the
function returns; it does, however, cause the function to exit
at that line.
A function can have separate statement lists categorized by the
keywords Process, Begin, and End. These three types of statement lists
have distinct methods for handling pipeline input. A filter is a particular
class of functions designated with the keyword Filter. Functions can also be
constructed to behave or operate similar to cmdlets without requiring C#
programming.
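To illustrate the last point, the following hedged sketch shows a function with cmdlet-like behavior, built with the CmdletBinding attribute. The function name and its doubling logic are illustrative assumptions, not taken from the text:

```powershell
# A minimal sketch of an "advanced" function that behaves like a cmdlet.
# Get-DoubledValue and its parameter are illustrative assumptions.
function Get-DoubledValue {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [int]$Number
    )
    process {
        # Runs once per pipeline object, just as a compiled cmdlet would
        $Number * 2
    }
}

Get-DoubledValue -Number 5      # 10
1, 2, 3 | Get-DoubledValue      # 2 4 6
```

The function accepts both a named parameter and pipeline input, with no C# required.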
Function Syntax
The syntax for specifying a function is as follows:
function [<scope:>]<name> [([type]$parameter1[,[type]$parameter2])]
{
param([type]$parameter1 [,[type]$parameter2])
dynamicparam {<statement list>}
begin {<statement list>}
process {<statement list>}
end {<statement list>}
}
A function has the following components:
● List of PowerShell statements demarcated with brace symbols
{}
● Function keyword
● Scope which is optional
● User selected name
● Named parameters which are optional
Simple Functions
A simple function is a standard function deployed without the
optional components (named parameters and a scope modifier).
A simple function syntax is as follows:
function <function-name> {statements}
To illustrate, this function starts PowerShell in Administrator mode:
function Start-PSAdmin {Start-Process PowerShell -Verb RunAs}
The function can be used by invoking it with its name:
>Start-PSAdmin
Statements can be added to a function by typing different statements on
separate lines, or by demarcation with the semicolon symbol (;). At this
stage, it would be informative to illustrate the concepts with a simple
example.
Example: A function for finding .jpg files that were changed after a
specific date
This function identifies all files with the extension .jpg located under the
path $env:UserProfile that were changed after the date specified by the $Start variable:
function Get-NewPix
{
$start = Get-Date -Month 1 -Day 1 -Year 2010
$allpix = Get-ChildItem -Path $env:UserProfile\*.jpg -Recurse
$allpix | Where-Object {$_.LastWriteTime -gt $Start}
}
Convenient functions can be organized into a toolbox and added into
your profile.
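As a hedged sketch of the toolbox idea, the following commands append a function to your profile file so it is defined in every new session. Creating the profile file first is an assumption about your setup; the appended function body is the Start-PSAdmin example from above:

```powershell
# A sketch of saving a convenient function to your PowerShell profile.
# Create the profile file if it does not already exist:
if (-not (Test-Path -Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}

# Append the function definition; it will load in every new session:
Add-Content -Path $PROFILE -Value @'
function Start-PSAdmin { Start-Process PowerShell -Verb RunAs }
'@
```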
Function Names
Functions may have any name assigned to them. The best practice
recommendation, however, is for the names to conform to established
standard PowerShell naming conventions for all commands. These
conventions include a recommendation that the names comprise a pairing of
a verb and a noun. In doing so, the noun characterizes the item the function
acts on, and the verb describes the function’s action. The verb should be one
of the approved PowerShell standard verbs for commands. Standard verbs
help keep the names of all commands coherent, straightforward, and
obvious to users.
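The approved verbs can be checked directly in a session. This small sketch, with an illustrative candidate verb, uses the Get-Verb cmdlet:

```powershell
# Check a candidate verb against the approved PowerShell verb list.
$candidate = 'Get'
$approved  = (Get-Verb).Verb    # the full list of approved verbs
if ($approved -contains $candidate) {
    "'$candidate' is an approved verb"
}
else {
    "'$candidate' is not approved; pick an alternative from Get-Verb"
}
```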
Functions Parameters
Named Parameters
A function can define an arbitrary number of named parameters.
Named parameters may be given default values. Parameters can be
defined inside or outside the function brace symbols {}. When defined
within the brace symbols, a Param keyword must be used. The syntax is as
follows:
function <name> {
param ([type]$parameter1[,[type]$parameter2])
<statement list>
}
When parameters are defined outside the brace symbols, the
Param keyword is not required. The syntax is as
follows:
function <name> [([type]$parameter1[,[type]$parameter2])] {
<statement list>
}
To illustrate, the external brace syntax may be used as follows:
>Function Add-Numbers($one, $two) {
$one + $two
}
The internal brace approach is the preferred method, but either
approach will give the desired result. When the function is executed, the
supplied parameter value is assigned to each variable that contains the name
of the parameter. The variable value may be used inside the function.
Let us consider a simple example to illustrate these concepts.
Example: Get-SmallFiles function
In this illustration, a function named Get-SmallFiles is defined. The
function includes a parameter called $Size. The function shows all files
smaller than the $Size parameter value, and does not include
directories:
function Get-SmallFiles {
Param($Size)
Get-ChildItem $HOME | Where-Object {
$_.Length -lt $Size -and !$_.PSIsContainer
}
}
Inside the function, the variable $Size, which matches the
defined parameter name, can be used. The function can be invoked with this
command:
>Get-SmallFiles -Size 50
A value can be passed to the named parameter without referring to its
name. To illustrate, this command produces identical results to the
command above:
> Get-SmallFiles 50
A parameter can have an assigned default value. The default value can
be defined using a definition expression. The definition expression involves
placing an equal sign = to the right of the parameter followed by the default
value. This is illustrated in the following revised Get-SmallFiles function
definition:
function Get-SmallFiles ($Size = 100) {
Get-ChildItem $HOME | Where-Object {
$_.Length -lt $Size -and !$_.PSIsContainer
}
}
If Get-SmallFiles is typed with the value omitted, the function will
assign the value 100 to $Size. If, on the other hand, a value is supplied, the
function will be evaluated with the supplied value. A help string
describing the parameter default value can also be attached. This is done by
first attaching a PSDefaultValue attribute to the parameter description and
then specifying the Help property of that attribute. This example shows how
to provide a brief help string describing the specified default value for the
Size parameter of the function Get-SmallFiles by attaching a PSDefaultValue
attribute:
function Get-SmallFiles {
param (
[PSDefaultValue(Help = '100')]
$size = 100
)
}
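The default-value behavior described above can be sketched with a simplified, illustrative function (the function name and output strings are assumptions made for demonstration):

```powershell
# A hedged demonstration of default parameter values.
function Get-SizeLabel ($Size = 100) {
    "Threshold is $Size"
}

Get-SizeLabel        # default used:   Threshold is 100
Get-SizeLabel 250    # value supplied: Threshold is 250
```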
Positional Parameters
A function parameter that does not have a name assigned to it is called
a positional parameter.
PowerShell assigns values to positional parameters according to the order
in which the values are supplied to the function. The supplied values are
collected, in the order given, in an array-typed automatic variable called
$args. To illustrate, the first value supplied after the name of the function
is assigned to the first position of $args, $args[0].
The Get-Extension function in this illustration attaches a file name .txt
extension to a supplied file name positional parameter:
>function Get-Extension {
$name = $args[0] + ".txt"
$name
}
When the function is invoked at the command line:
>Get-Extension myTextFile
the output is:
myTextFile.txt
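Since $args is an array, a function can read several positional values. This illustrative sketch (the function name is an assumption) joins a base name and an extension supplied as two positional parameters:

```powershell
# $args collects every positional value supplied to the function.
function Join-NameParts {
    $args[0] + "." + $args[1]    # first and second positional values
}

Join-NameParts report txt    # report.txt
```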
Switch Parameters
A function parameter that has a name but not a required value is called
a switch parameter. The parameter is supplied to the function by using only
its name. The switch parameter is defined by specifying its type as [switch]
before its name in the function definition, as shown in the following
illustration:
>function Switch-Item {
param ([switch]$on)
if ($on) { "Switch on" }
else { "Switch off" }
}
When the switch parameter name is supplied after the name of the function,
the function shows "Switch on". If the name of the switch
parameter is not supplied, the function shows "Switch
off". These concepts are illustrated in the following command
sequence:
>Switch-Item -on
Switch on
>Switch-Item
Switch off
The switch parameter can also be supplied to the function using
Boolean value notation. This is illustrated in the following command
sequence:
>Switch-Item -on:$true
Switch on
>Switch-Item -on:$false
Switch off
Functions and Pipeline Objects
Any function can accept pipeline input. The
Process, Begin, and End keywords control how the function handles the
processing of the pipeline. The following syntax illustrates how these three
keywords are used:
function <name> {
begin {<statement list>}
process {<statement list>}
end {<statement list>}
}
The statement list following the Begin keyword runs only once at the
start of the function evaluation. It is important to note that when the
Process, Begin, and End keyword blocks are used, all function code has to
be contained within those blocks. As long as one of these blocks is defined,
any code that resides outside of the blocks will not be recognized or
executed.
The statement list bound to the Process keyword runs once for each
pipeline input object. While this block is executing, each of the objects of
the pipeline will be allocated, one object at a time, to the variable $_, which
is an automatic variable. When all the pipeline objects have been received
by the function, then the statement list with the keyword End runs once.
If no Process, Begin, or End keywords are found in the function
definition, then PowerShell treats all function statements collectively as that
of an End block statement list.
The following function uses only the Process keyword. The function
shows output from a sample pipeline:
>function Get-Pipeline
{
process {"The value is: $_"}
}
The behavior of this function can be demonstrated using the
following command, which supplies a comma-separated list of numbers as the
pipeline input:
>1,2,4 | Get-Pipeline
The value is: 1
The value is: 2
The value is: 4
When a function is used within a pipeline, the pipeline input objects of
the function are allocated to the variable $input, which is an automatic
variable. The function will run all statements within the Begin block before
any of the pipeline objects are sent to the function. The function will run all
statements within the End block after all pipeline input objects have been
received.
The following example illustrates the properties of the automatic
$input variable for a function using a Begin keyword and an End keyword.
The function Get-PipelineBeginEnd is defined as follows:
>function Get-PipelineBeginEnd
{
begin {"Begin: The input is $input"}
end {"End: The input is $input" }
}
This command and its resulting output illustrate how the function
processes pipeline input:
>1,2,4 | Get-PipelineBeginEnd
Begin: The input is
End: The input is 1 2 4
In the case that the Process keyword is additionally used in the
function definition, then each pipeline input object is removed from the
$input variable and allocated to $_. The following function has a Process
keyword in its definition:
function Get-PipelineInput
{
process {"Processing: $_ " }
end {"End: The input is: $input" }
}
In this illustration, each pipeline input object is sent to the Process
keyword statement block as it is piped in. The
Process keyword statements run on one object at a time. The
automatic $input variable is empty by the time function evaluation
reaches the End block. This is demonstrated by the following
output:
>1,2,4 | Get-PipelineInput
Processing: 1
Processing: 2
Processing: 4
End: The input is:
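The three keywords can also be combined in one function. This hedged sketch, with an illustrative name, totals the pipeline input:

```powershell
# A sketch combining Begin, Process, and End blocks to total pipeline input.
function Measure-PipelineSum {
    begin   { $sum = 0 }          # runs once, before any input arrives
    process { $sum += $_ }        # runs once per pipeline object
    end     { "Total: $sum" }     # runs once, after all input is received
}

1, 2, 4 | Measure-PipelineSum     # Total: 7
```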
Filters
A function type that is specifically designed to run on each pipeline object
is called a filter. A filter resembles a function that has only a
Process {<statement list>} block. The following shows the filter syntax:
filter [<scope:>]<name> {<statement list>}
The example filter below collects log entries from a running pipeline
and then shows only either the entry message portion or the whole entry:
filter Get-ErrorLog ([switch]$message)
{
if ($message) { Out-Host -InputObject $_.Message }
else { $_ }
}
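A hedged sketch of invoking such a filter is shown below; the filter is repeated for completeness, and the sample log objects are fabricated purely for illustration:

```powershell
# The Get-ErrorLog filter from the text, repeated so this sketch is
# self-contained.
filter Get-ErrorLog ([switch]$Message) {
    if ($Message) { Out-Host -InputObject $_.Message }
    else { $_ }
}

# Fabricated sample log entries (illustrative assumption):
$entries = @(
    [pscustomobject]@{ Message = 'Disk full'; Id = 1 }
    [pscustomobject]@{ Message = 'Timed out'; Id = 2 }
)

$entries | Get-ErrorLog -Message    # shows only the Message portion
$entries | Get-ErrorLog             # passes the whole entries through
```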
Function Scope
A function exists only within the scope in which it is created. If a script
creates a function, then all statements within the script can access the
function; by default, a function created by a script cannot be accessed from
the console. The function scope can, however, be specified explicitly. To
illustrate, the function below shows how to add a function to the global scope:
function global:Get-DependentSvs {
Get-Service | Where-Object {$_.DependentServices}
}
A function that has a global scope can be used at the console, in other
functions, and in scripts. In general, each function creates a function scope.
The items created within the function, such as variables, exist only within
that function’s scope.
A function variable cannot, however, be called outside the function
scope even if the function itself is within the global scope. This scenario is
illustrated by the following command sequence and output:
function global:Send-Greeting {
[CmdletBinding()]
Param(
[Parameter(Mandatory=$true)]
[string] $Name
)
Process
{
Write-Host ("Hello " + $Name + "!")
}
}
>Send-Greeting -Name user
Hello user!
>$Name
>
Function: Drive
All PowerShell filters and functions are stored automatically within the
drive called Function:. The Function provider in PowerShell exposes this
drive. The way to refer to this drive is to type a colon symbol (:) after the
word Function, similar to a file system drive. This command shows all
current session functions:
>Get-ChildItem function:
More information about the Function: drive is available from the
function provider help topic accessed by typing Get-Help Function. The
function statement list is specified in the function definition according to
the function syntax. To illustrate, to show the commands of the PowerShell
built-in Help function, enter the following command:
>(Get-ChildItem function:help).Definition
An alternative syntax is shown below:
>$function:help
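The following hedged sketch ties these ideas together: it defines a throwaway function (the name is illustrative) and reads its statement list back through the Function: drive:

```powershell
# A sketch of exploring the Function: drive with an illustrative function.
function Test-Sample { "hello" }

# List the function as a drive item:
Get-ChildItem function:Test-Sample

# Read its statement list (two equivalent forms):
(Get-ChildItem function:Test-Sample).Definition
${function:Test-Sample}
```

Note that the variable form needs braces here because the function name contains a hyphen.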
Reusing Functions
Functions created in a session are only available for use in that
particular session. Once the session is closed, the function is no longer
available and calling the function in a new session will return an error.
However, a function can be made universally available in any current or
future session by adding the function to your profile or saving it to a script
file.
Consider the following function called Get-CmdletAlias, which
lists the aliases of a specified cmdlet:
function Get-CmdletAlias ($cmdletname) {
Get-Alias |
Where-Object -FilterScript {$_.Definition -like "$cmdletname"} |
Format-Table -Property Definition, Name -AutoSize
}
One useful PowerShell profile is the “all users, all hosts” profile,
accessed either by locating the $PSHOME\Profile.ps1 file or by opening the
path $env:windir\System32\WindowsPowerShell\v1.0\Profile.ps1 in
Windows PowerShell. To save a function for future use, enter the function in
the script file. The function can then be invoked in all PowerShell
sessions. This scenario is demonstrated as follows, using the
Get-CmdletAlias function defined above:
>Get-CmdletAlias Get-Command
Definition Name
---------- ----
Get-Command gcm
The function may also be saved in a script file, such as
AliasFunction.ps1, located in the folder <path>. Then, dot-sourcing the
script with the following command will bring the function into the current
session as needed:
. <path>\AliasFunction.ps1
Function Help
The help file for cmdlets, scripts, providers, and functions can be
displayed by calling the cmdlet Get-Help. The Get-Help cmdlet invoked
with the name of the function supplied for the Name parameter gets help for
the specified function. To illustrate, to show the help file for the function
Get-MyDisks using the Get-Help cmdlet, enter this command:
>Get-Help Get-MyDisks
Help for a function can be written by making use of either of the
following methods:
● Comment-based function help: Use special comment
keywords to create a help topic for the function. The help comments may
be placed in one of three locations: on the lines before the function
keyword, inside the function braces at the start of the statement list, or
inside the function braces at the end of the statement list.
We will discuss the syntax in further detail in a later section.
● XML-Based function help: This type of help is similar to that of
cmdlets. This type of help is required in help-topic localization for multiple
languages and is created using the .ExternalHelp version of the comment-
based help special keyword within the comments.
Comment-Based Help Syntax
Comment-based function help, which is displayed in the same format as help
produced from XML files, is viewed by invoking the Get-Help cmdlet and
any of its parameters.
The general comment-based help syntax is as follows:
<#
.<help keyword>
<help content>
#>
or:
.<help keyword>
<help content>
Comment-based help may take the form of a single-line comment or a
sequence of comment lines. To add a multi-line comment, enclose the comment
block within <# and #>. Alternatively, precede each comment line with the
comment symbol (#); PowerShell interprets every line so marked as a
comment.
All lines of a comment-based help topic must be contiguous. If the help
topic follows a comment that is not part of the topic, there must be at
least one blank line between the final non-help comment line and the first
line of the comment-based help topic.
Comment-based help sections are defined using keywords. A dot
symbol (.) must precede a keyword within the comment-based help. The
order of the keywords may be arbitrary, and the names of the keywords are
not case-sensitive. To illustrate, a function description is preceded by the
keyword "Description" as follows:
<#
.Description
Get-Function displays the name and syntax of all functions in the
session.
#>
At least one keyword is required in a comment block. A comment
block may have the same keyword appearing multiple times, such as
"EXAMPLE". The help content designated by a keyword may span multiple lines
and starts on the line following the keyword.
The function comment-based help may be placed in a single location,
chosen from the following three:
● Before the function keyword. No more than one blank line
is permitted between the final line of the help and the
function keyword.
● At the end of the function body
● At the beginning of the function body
To illustrate:
<#
.<help keyword>
<help content>
#>
function Get-Function { }
or:
function Get-Function
{
# function logic
<#
.<help keyword>
<help content>
#>
}
or:
function Get-Function
{
<#
.<help keyword>
<help content>
#>
# function logic
}
are all appropriate templates for creating a comment-based function help
topic.
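Putting the pieces together, the following hedged sketch places a complete comment-based help topic before the function keyword; the function itself, its parameter, and the help text are illustrative assumptions:

```powershell
# A complete, illustrative comment-based help topic for a sample function.
<#
.SYNOPSIS
Gets the busiest processes by CPU time.
.DESCRIPTION
Get-TopProcess returns processes sorted in descending CPU order.
.EXAMPLE
Get-TopProcess -Count 3
#>
function Get-TopProcess ($Count = 5) {
    Get-Process | Sort-Object CPU -Descending | Select-Object -First $Count
}

# The help topic is now available through Get-Help:
Get-Help Get-TopProcess
```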
Exercises
5.1. How does PowerShell run function commands?
5.2. Can functions have parameters? If so, what types?
5.3. Where are the two places where function parameters can be read?
5.4. What are the components of a function?
5.5. What types of functions can accept input from a pipeline?
5.6. What is the scoping rule for functions?
5.7. What does the Function: drive do?
Chapter Summary
The key points of this chapter were:
● A function is a list of statements with a user assigned name.
● Functions should have names that follow PowerShell naming
conventions.
● Parameters may be used with functions.
● Filters are a type of function that runs on each of the objects in a
pipeline.
● Functions can be enhanced through the creation of
comment-based or XML-based help.
In the next chapter you will learn about modules and providers.
Chapter Six:
Modules and Providers
In this chapter we will consider modules and providers. Modules and
providers are environment customization constructs that enhance the
functioning of PowerShell. Modules may be used to provide access to more
PowerShell commands while providers allow for enhanced access to data in a
specialized data store.
In the first part of this chapter we will look at modules, and in the
second part, we will look at providers.
Modules
In this section, we explore PowerShell modules. In PowerShell a unit that
holds commands is called a module. The commands may be functions,
variables, aliases, cmdlets, and providers. All providers and cmdlets in a
PowerShell session are delivered by a snap-in or a module. Administrators
who receive modules can add the commands in the modules to their
PowerShell sessions and use them just as they would use the built-in
commands. Command authors may use modules to arrange commands and
deliver them to others.
PowerShell users can obtain modules, introduce the commands contained
in the modules to their sessions, and supplement the built-in commands
available to them.
Module Auto-Loading
Introduced in Windows PowerShell 3.0, automatic module importing
occurs whenever a command contained within an installed module is called for
the first time in a session. This reduces the amount of profile configuration,
environment set-up, and module management required before module commands
become available for use, and it makes module commands easier to locate. The
feature also works with the enhanced functionality introduced to cmdlets like
Get-Command, which are able to obtain commands directly from installed
modules. This allows a command to be found and used without explicit
module importing.
The following commands show how this feature can be triggered using the
Get-Content cmdlet:
● Running the command:
>Get-Content -Path C:\Test\Text.txt
● Getting the command:
>Get-Command Get-Content
● Displaying the command help:
>Get-Help Get-Content
This feature treats Get-Command calls that include the wildcard
symbol (*) as exploratory rather than as intent to use, so they do not
trigger an import. The feature only applies to modules located in a path
stored in the PSModulePath variable; modules in other locations require an
explicit import using the Import-Module cmdlet. Auto-loading also does not
apply to commands that make use of PowerShell providers. The feature can be
further configured using the $PSModuleAutoloadingPreference variable.
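As a brief, hedged sketch, the module search path can be inspected and auto-loading adjusted as follows ('All', 'ModuleQualified', and 'None' are the documented preference values):

```powershell
# Inspect the folders PowerShell searches for modules:
$searchPaths = $env:PSModulePath -split [System.IO.Path]::PathSeparator
$searchPaths

# Adjust auto-loading for the current session, then restore it:
$PSModuleAutoloadingPreference = 'None'   # disable automatic importing
$PSModuleAutoloadingPreference = 'All'    # restore the default behavior
```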
For example, using a command that requires the WSMan: drive, such
as the Get-PSSessionConfiguration cmdlet, might require running the
Import-Module cmdlet to import the Microsoft.WSMan.Management module,
which includes the WSMan: drive. In such cases you must run Import-Module
yourself, and you can use the $PSModuleAutoloadingPreference variable to
enable, disable, or configure automatic importing of modules.
Using a Module
Using a module requires three tasks:
1. Install the module.
2. Find the commands the module added.
3. Use the commands the module added.
In this chapter we consider how to perform these tasks alongside other
useful information for module management.
Module Installation
PowerShell has a set of pre-installed modules. The following statements
explain the steps to install a module on the local machine.
First, create a Modules directory for the current user. This may be performed
with the following command, which makes use of the New-Item cmdlet:
>New-Item -Type Directory -Path `
$HOME\Documents\WindowsPowerShell\Modules
Copy the module folder (called C:\ps-test\MyModule in this illustration)
to the Modules folder. This may be performed by using the following
command, which uses the Copy-Item cmdlet:
>Copy-Item -Path C:\ps-test\MyModule -Destination `
$HOME\Documents\WindowsPowerShell\Modules
Modules may be installed in any arbitrary location, but housing modules
within a default location makes for easier module management.
Finding Installed Modules
Modules that are installed within a default location, but have not been
imported into the session, can be found with this command:
>Get-Module -ListAvailable
Modules that have been imported into the session can be found using this
command:
>Get-Module
Finding Module Commands
Available module commands can be found using the cmdlet Get-
Command. The cmdlet Get-Command parameters can be used to filter
commands by noun, name, or module. The following command finds module
commands:
>Get-Command -Module <module-name>
To illustrate, this command finds commands within the module
BitsTransfer:
>Get-Command -Module BitsTransfer
It will generate the following output:
PS C:\> Get-Command -Module BitsTransfer
CommandType Name Version Source
----------- ---- ------- ------
Cmdlet Add-BitsFile 2.0.0.0 BitsTransfer
A built-in cmdlet such as Get-Date can be hidden when an imported module
contains a command of the same name. Invoking the Get-Command cmdlet with
the All parameter displays every available Get-Date command, generating
output of the following form:
>Get-Command Get-Date -All
CommandType Name ModuleName
Chapter Summary
The main points of the chapter were:
● A module is a grouping of PowerShell commands, such as
cmdlets, providers, functions, workflows,variables and aliases.
● Modules are used by installing them and using the commands that
the module added.
● Commands in a module can be used through module auto-loading.
● Modules are usually managed using cmdlets.
● Providers are .NET Framework-based programs for accessing data
in specialized data stores.
● Providers are installed through modules.
● Providers are usually managed using cmdlets.
● Providers standardize the way in which data is accessed in
PowerShell.
In the next chapter you will learn about scripting.
Chapter Seven:
Scripting
Scripting
In PowerShell, a plain text file with at least one command is called a
script. Scripts in PowerShell have a .ps1 extension to their file name. Script
execution is similar to cmdlet invocation. The path with the file name is typed,
and parameters are used to set options and submit data values. Scripts can be
run in either remote or local sessions. Script writing is useful because it
stores a command for easy sharing and repeated use. An additional advantage
is that executing the command becomes a matter of submitting the script's
file name with its path on the console. Scripts may be simple one-command
directives or large, intricate code structures.
There are supplementary features associated with scripts, such as data
sections, parameter usage, digital signing for security, and the special
#Requires statement. Help topics can also be written for the script itself
as well as for any functions it includes.
Running a Script
In PowerShell, script execution depends on the execution policy. The
PowerShell default execution policy is "Restricted", which precludes all
script execution, including scripts authored on the local computer. This
execution policy must be modified before scripts can run. The policy is
stored in the registry, so the adjustment only needs to be made once per
computer. To modify the PowerShell execution policy, enter this command on
the console:
>Set-ExecutionPolicy AllSigned
Alternatively:
>Set-ExecutionPolicy RemoteSigned
The modification takes effect immediately. Submit the complete path
as well as the name for script execution. To illustrate, to execute the script
Get-ServiceLog.ps1 located in C:\Scripts, the command is:
>C:\Scripts\Get-ServiceLog.ps1
To execute a script in the current directory, prefix the script name
either with a dot symbol (.) representing the current directory or with the
complete path to the current directory, followed by a backslash (\). To
illustrate, the command to execute a script called Get-ServiceLog.ps1 in the
current location is:
>.\Get-ServiceLog.ps1
If parameters are part of the script, submit the parameter values and their
names following the file name of the script. To illustrate, this command makes
use of the parameter ServiceName within the script Get-ServiceLog for a
service activity log request belonging to WinRM:
>.\Get-ServiceLog.ps1 -ServiceName WinRM
A PowerShell security feature is that scripts will not run when you
double-click the script icon in the file explorer or when you type the script
name without the full path, even if the script is in the current directory.
Run With Option
Introduced in Windows PowerShell 3.0, the File Explorer can be used to
run scripts by using the “run with PowerShell” feature.
The feature may be used as follows: open File Explorer, right-click the
script file, and select "Run with PowerShell".
This feature is intended for scripts that do not generate console
output and that do not require parameters.
Remote Execution
Remote script execution can be conducted using the parameter FilePath of
the cmdlet Invoke-Command. The complete file name and path can be
submitted as a FilePath parameter value. The script has to be housed on a local
path or a locally accessible folder. This command executes the script
Get-ServiceLog.ps1 on two remote computers (ServerA and ServerB):
>Invoke-Command -ComputerName ServerA,ServerB -FilePath `
C:\Scripts\Get-ServiceLog.ps1
Script Help
The cmdlet Get-Help obtains help topics for cmdlets, other command
types and scripts.
A script help topic may be displayed by invoking the Get-Help cmdlet
with the script name and script path as the Name parameter value. The path
may be omitted if the path of the script is part of the environment variable
PATH.
To illustrate, the help file for the script ServicesLog.ps1 may be displayed
with this command:
>Get-Help C:\admin\scripts\ServicesLog.ps1
Writing a Script
A script may house any valid PowerShell commands, including
functions, control structures, pipeline commands, and single commands.
To write a script, enter the commands in a plain text editor and save the
file with the .ps1 file name extension.
Scripts in Modules
A module is a collection of associated PowerShell resources that may be
delivered as a group. Modules may be used to organize functions, scripts,
and other resources, and to deliver and access programs from trusted
sources. Scripts may be included in modules, or
alternatively, a new module called the script module may be created. A script
module is made up primarily or entirely of a script file and its supporting
resources. A script file with a name extension .psm1 is a script module.
Other Features of Scripts
PowerShell offers a variety of beneficial features that may be used within
scripts:
● #Requires: A script can include a #Requires statement,
which prevents the script from running unless the stipulated
PowerShell version, modules, or snap-ins are available.
● $PSCommandPath: This contains the complete name and path of the
executing script. It is valid in every script. This
automatic variable was established as part of Windows
PowerShell 3.0.
● $PSScriptRoot: This contains the folder from which the executing
script is running. This variable was established as part of Windows
PowerShell 3.0.
● $MyInvocation: This automatic variable holds information about the
executing script, including how it was invoked or started. The
variable and its properties may be used to obtain information about
the script while it executes. $MyInvocation.MyCommand.Path
holds the path and file name of the script. $MyInvocation.Line
holds the command that started the script, together with all of its
parameters and values. Starting with Windows PowerShell 3.0,
$MyInvocation has two additional properties that hold information
on the script that invoked or called the executing script. These
property values are populated only when the executing script was
called by another script.
● PSCommandPath: This property holds the complete path and file
name of the script that invoked or called the executing script.
● PSScriptRoot: This property holds the folder of the script that
invoked or called the executing script.
● Data sections: The Data keyword may be used to separate logic
from data within scripts. Data sections can also simplify
localization.
● Script signing: A digital signature may be added to a script file.
Depending on the execution policy in effect, digital signatures can
be used to prevent the execution of scripts that might contain
unsafe commands.
Note: Do not confuse the two pairs. $PSScriptRoot and
$PSCommandPath are automatic variables; they hold information on the
executing script. PSScriptRoot and PSCommandPath are properties of
the $MyInvocation variable; they hold information on the script that
called the currently executing one.
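As a sketch, a single script might combine several of the features above; the output depends on where the script file is saved and how it is started:

```powershell
#Requires -Version 7.0
# The statement above stops the script on PowerShell versions below 7.0.

# Automatic variables describing the executing script itself:
"Script path:   $PSCommandPath"
"Script folder: $PSScriptRoot"

# $MyInvocation describes how the script was started:
"Invoked with:  $($MyInvocation.Line)"
```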
Script Blocks
A set of statements or expressions that can be operated as a single unit is
called a script block. A script block can accept arguments and return
values. Functionally, a script block is a list of statements enclosed in
braces {} and has the following syntax:
{<statement list>}
A script block returns the output of all commands in the script block
either as a single object or as an array of objects. A return value can be
specified using the return keyword. The return keyword does not affect or
suppress other output returned from the script block, but it does exit the script
block at that line.
Like functions, script blocks may include parameters. The Param
keyword is used to define named parameters, as shown in this syntax:
{
Param([type]$Parameter1 [,[type]$Parameter2])
<statement list>
}
It is important to note that, in contrast to a function, a script block cannot
define parameters outside its braces. Like functions, script blocks may
include the DynamicParam, Begin, Process, and End keywords.
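To make this concrete, here is a sketch of a script block that combines a param block with the Begin, Process, and End keywords to handle pipeline input (the variable names are illustrative):

```powershell
# A script block that doubles each number piped into it and counts the items.
$double = {
    param([int]$Offset = 0)
    begin   { $count = 0 }                      # runs once, before any input
    process { ($_ * 2) + $Offset; $count++ }    # runs once per piped object
    end     { "Processed $count items" }        # runs once, after all input
}

1..3 | & $double
```

Piped through the call operator, the block emits 2, 4, 6, and then the summary line from its End block.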
Script Block Use
A script block is an instance of the Microsoft .NET
System.Management.Automation.ScriptBlock class. Commands can take
script block parameter values. To illustrate, the Invoke-Command cmdlet has a
ScriptBlock parameter that takes a script block value, as shown in the
following command:
>Invoke-Command -ScriptBlock { Get-Process }
The next call will generate the following output (of which only the first
few lines are shown):
PS C:\> Invoke-Command -ScriptBlock { Get-Process }
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
The Invoke-Command cmdlet is also able to run script blocks, which hold
parameter blocks. The parameter ArgumentList can be used to assign
parameters by position:
>Invoke-Command -ScriptBlock { param($p1, $p2)
"p1: $p1"
"p2: $p2"
} -ArgumentList "First", "Second"
Which will generate the following output:
PS C:\> Invoke-Command -ScriptBlock { param($p1, $p2)
>> "p1: $p1"
>> "p2: $p2"
>> } -ArgumentList "First", "Second"
p1: First
p2: Second
PS C:\>
Variables can be used to store and execute script blocks. In the following
command sequence, the first command stores a script block in a variable $a,
and the second command passes $a to the Invoke-Command cmdlet:
>$a = { Get-Service BITS }
>Invoke-Command -ScriptBlock $a
This will generate the following output:
PS C:\> $a = { Get-Service BITS }
PS C:\> Invoke-Command -ScriptBlock $a
Status Name DisplayName
------ ---- -----------
Running BITS Background Intelligent Transfer Ser...
The script block in the preceding example used the param keyword to
define two parameters; the ArgumentList values bound the $p1 parameter
to the string "First" and the $p2 parameter to the string "Second" by
position. A script block stored in a variable can also be invoked with
named parameters by using the call operator (&), as the example below
shows:
>$a ={ param($p1, $p2)
"p1: $p1"
"p2: $p2"
}
>&$a -p2 "First" -p1 "Second"
This will generate the following output:
PS C:\> $a ={ param($p1, $p2)
>> "p1: $p1"
>> "p2: $p2"
>> }
PS C:\> &$a -p2 "First" -p1 "Second"
p1: Second
p2: First
PS C:\>
Another way to run a script block stored in a variable is with the call
operator (&). Like the Invoke-Command cmdlet, the call operator runs the
script block in a nested child scope, and it offers an easier way to pass
parameters to script blocks.
Example: Saving output from script block execution with call operator
The following sequence of commands:
>$a = { 1 + 1}
>$b = &$a
>$b
will generate the following output:
2
Example: Saving output from script block execution with the Invoke-
Command cmdlet
The following sequence of commands:
>$a = { 1 + 1}
>$b = Invoke-Command $a
>$b
will generate the following output:
2
As these examples show, the output of a script block can be stored in a
variable by simple assignment.
Delay-bind type script blocks
Delay-bind type script blocks are used on typed parameters that accept
pipeline input ByValue or ByPropertyName. Within the delay-bind type
script block, the object that is piped in can be referenced with the
pipeline variable $_.
This is illustrated by the following pipeline command, which prepends
old_ to the name of the file config.log:
># Renames config.log to old_config.log
>dir config.log | Rename-Item -NewName {"old_" + $_.Name}
It is important to note the following concerning delay-bind type script
blocks used as parameters:
● All parameter names used within delay-bind type script blocks
must be explicitly specified.
● The parameter cannot be untyped, and cannot be of type [object]
or [scriptblock].
● An error is raised if a delay-bind type script block is used when no
pipeline input is provided.
This concept is illustrated in this example and its resulting output:
>Rename-Item -NewName {$_.Name + ".old"}
will lead to the error shown below:
PS C:\> Rename-Item -NewName {$_.Name + ".old"}
Rename-Item : Cannot evaluate parameter 'NewName' because its
argument is specified as a script block and there is no input. A script block
cannot be evaluated without input.
At line:1 char:22
+ Rename-Item -NewName {$_.Name + ".old"}
+ ~~~~~~~~~~~~~~~~~~
+ CategoryInfo : MetadataError: (:) [Rename-Item],
ParameterBindingException
+ FullyQualifiedErrorId :
ScriptBlockArgumentNoInput,Microsoft.PowerShell.Commands.RenameItem
Command
Exercises
7.1. What is the file name extension for PowerShell scripts?
7.2. Why are scripts useful when running commands?
7.3. Why is it not possible to run a script on PowerShell's default execution
policy?
7.4. What two commands can be used to change PowerShell's execution policy
to one that can be used to run scripts on a local computer?
7.5. What command should you issue to get the help topics for a script?
7.6. How do you write a script?
7.7. How are parameters defined in a script?
7.8. How do you run a script in the current scope instead of the script scope?
7.9. What is a script module and what is its file name extension?
7.10. What is a script block?
Chapter Summary
The key points of the chapter were:
● A script is simply a plain text file containing one or more
PowerShell commands.
● Scripts are basically run in PowerShell under the appropriate
execution policy by typing the path and name of the script file.
● PowerShell scripts are written using a text editor or a script editor.
● Scripts can be enhanced through the creation of comment-based or
XML-based help for the script.
● Scripts run in script scope.
● A set of PowerShell statements or expressions can be used as a
single unit using a script block.
In the next chapter you will learn about Remoting and the PowerShell
Gallery.
Chapter Eight:
Remoting and PowerShell Gallery
This chapter considers Remoting and the PowerShell Gallery. Remoting
involves running PowerShell commands and scripts on remote
computers. The PowerShell Gallery is a central repository for PowerShell
content and a component of the PowerShell ecosystem.
In the first part of this chapter, we will look at Remoting; in the second
part, we will consider the PowerShell Gallery.
Remoting
Remote commands may be run on many computers or a single computer
through the use of either persistent or temporary connections. Two key
parameters in PowerShell remoting are ComputerName and Session. The
ComputerName parameter is designed for situations where a single command
or several unrelated commands are run on one or many computers. The
Session parameter establishes a persistent connection to a remote computer.
This chapter will consider a set of illustrations that will demonstrate the
various methods used in executing remote commands.
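To preview both approaches described above (Server01 is a placeholder computer name), a temporary connection and a persistent connection might look like this:

```powershell
# Temporary connection: one command on one computer (placeholder name).
Invoke-Command -ComputerName Server01 -ScriptBlock { Get-Culture }

# Persistent connection: state is preserved between commands in the session.
$s = New-PSSession -ComputerName Server01
Invoke-Command -Session $s -ScriptBlock { $procs = Get-Process }
Invoke-Command -Session $s -ScriptBlock { $procs | Select-Object -First 3 }

# Close the connection when finished.
Remove-PSSession -Session $s
```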
After attempting these basic commands, the help topics can supply
additional information about the cmdlets used in the following
commands, including details and descriptions of how to modify the
commands to suit your needs.
Interactive Sessions
The simplest approach to executing remote commands is to start an
interactive session with a remote computer. During the session, the
commands you type execute on the remote computer just as if they had
been typed directly on it. An interactive session connects to only a single
computer. The Enter-PSSession cmdlet starts the interactive session. For
example, to start an interactive session with the computer Server01, use
the following command:
>Enter-PSSession Server01
The connection to Server01 is indicated by a change in the command
prompt:
[Server01]: PS>
At this stage, any commands you type run on Server01. To close the
interactive session, use:
>Exit-PSSession
ComputerName Parameter Cmdlets
Many cmdlets include a parameter called ComputerName, which helps
obtain objects from remote computers. Because these cmdlets do not use
PowerShell's WS-Management-based remoting, the ComputerName
parameter can be used on any computer running PowerShell; the
computers do not need to meet the system requirements for PowerShell
remoting or be configured for it.
The cmdlets that use the parameter ComputerName are shown in Table 6
below.
To illustrate, the following command displays the services that are on the
remote computer Server01:
>Get-Service -ComputerName Server01
In most cases, cmdlets that support remoting without special
configuration include a ComputerName parameter but not a Session
parameter. These may be displayed in a session with this command:
>Get-Command | Where-Object {
$_.Parameters.Keys -contains 'ComputerName' -and
$_.Parameters.Keys -notcontains 'Session'
}
This will generate the following output (of which the first few lines are
shown):
PS C:\Test> Get-Command | Where-Object {
>>
>> $_.Parameters.Keys -contains 'ComputerName' -and
>>
>> $_.Parameters.Keys -notcontains 'Session'
>>
>> }
CommandType     Name     Version     Source
-----------     ----     -------     ------
1. Install-Module
The default installation path for Install-Module is:
$env:ProgramFiles\WindowsPowerShell\Modules.
Administrator access is required to install modules to this path. If the
Scope parameter of the Install-Module cmdlet is set to CurrentUser,
the module installation path will be:
$env:USERPROFILE\Documents\WindowsPowerShell\Modules
2. Install-Script
The default installation path for Install-Script is:
$env:ProgramFiles\WindowsPowerShell\Scripts
Administrator access is needed to install scripts to this path.
If the Scope parameter of the Install-Script cmdlet is set
to CurrentUser, the script installation path will be:
$env:USERPROFILE\Documents\WindowsPowerShell\Scripts
Install-Module and Install-Script install the latest version of a specified
package by default. The RequiredVersion parameter can be used to
install a specific version.
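As a sketch of these options (the module name Pester and the version number are purely illustrative), a specific version of a module might be installed for the current user as follows:

```powershell
# Install for the current user only -- no administrator access required.
# The module name and version below are illustrative assumptions.
Install-Module -Name Pester -Scope CurrentUser -RequiredVersion 5.3.1

# The same pattern applies to scripts ('MyScript' is a hypothetical name).
Install-Script -Name MyScript -Scope CurrentUser
```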
Package Update
Packages installed from the Gallery may be updated by running either the
Update-Script or Update-Module cmdlets. When the Update-Module
cmdlet is invoked without parameters, it tries to update every module
that was installed with the Install-Module cmdlet; the Name parameter
may be used to update specific modules. Analogously, Update-Script
tries to update every script that was installed with the Install-Script
cmdlet, and its Name parameter may be used to update specific scripts.
List Packages installed from PowerShell Gallery
The Get-InstalledModule cmdlet may be used to determine which
modules were installed from the Gallery; it displays all modules on the
system that were obtained directly from the Gallery. Analogously, the
Get-InstalledScript cmdlet may be used to determine which scripts were
installed from the Gallery; it displays all scripts on the system that were
obtained directly from the Gallery.
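A sketch of these listing cmdlets in use (the module name in the last line is an illustrative assumption):

```powershell
# List every module installed from the PowerShell Gallery.
Get-InstalledModule

# List every script installed from the Gallery.
Get-InstalledScript

# Narrow the listing to a single package and show selected properties.
Get-InstalledModule -Name Pester | Select-Object Name, Version
```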
Exercises
8.1. What cmdlet is used to start an interactive session on a computer?
8.2. How many cmdlets have a ComputerName parameter in PowerShell 7.0?
8.3. How is remote command run using the Invoke-Command cmdlet?
8.4. What cmdlet is used to create a persistent connection to a remote
computer?
8.5. How are commands run in persistent connection?
8.6. How are local scripts run on remote computers?
8.7. What module is used to acquire PowerShell content from the PowerShell
Gallery?
8.8. What type of content is contained in PowerShell Gallery packages?
8.9. What command is used to install the latest NuGet provider in PowerShell?
8.10. What command is used to install the latest version of PowerShellGet in
PowerShell?
8.11. What cmdlet is used to update modules installed from the PowerShell
Gallery?
8.12. What cmdlet is used to update scripts installed from the PowerShell
Gallery?
8.13. What cmdlet is used to obtain a list of all modules installed from the
PowerShell Gallery?
8.14. What cmdlet is used to obtain a list of all scripts installed from the
PowerShell Gallery?
Chapter Summary
The key points of the chapter were:
● PowerShell commands can be run remotely on a single computer
or collection of computers.
● The easiest way to run remote commands is through an interactive
session.
● A more efficient way to run remote commands is through a
persistent connection.
● Remoting allows for PowerShell scripts to be run remotely.
● PowerShell Gallery is a central repository for PowerShell content.
● The PowerShellGet module contains cmdlets that are used for
discovering, installing, and publishing PowerShell packages.
Answers to Exercises
1.1. PowerShell can be run using command line options from another tool
such as Cmd.exe or bash or by starting a new session on the PowerShell
command line.
1.2. PowerShell, .NET Core and C#.
1.3. Yes, by using the PowerShell extension.
1.4. PowerShellGet module.
1.5. It is a package repository containing scripts, modules, and Desired State
Configuration (DSC) resources that can be downloaded and leveraged.
1.6. The VS Code --install-extension command line switch, providing the path
to a .vsix file, as follows: code --install-extension <File path>/myextension.vsix
2.1. 15
2.2. In simple terms, cmdlets perform an action and typically return a
Microsoft .NET Framework object to the next command in the pipeline.
2.3. It makes it possible to perform customized administrative actions in
PowerShell.
2.4. Implementing a cmdlet class that derives from one of two specialized
cmdlet base classes.
2.5. Verb and Noun specification to invoke the cmdlet.
2.6. 35
2.7. Using command syntax diagrams.
2.8. Aliases are alternate names for cmdlets or command elements such as a
function, script, file, or executable file.
2.9. 5. They are Get-Alias, New-Alias, Set-Alias, Export-Alias, and Import-
Alias.
2.10. Get-Help cmdlet.
3.1. A unit of memory in which values are stored.
3.2. Text strings that start with a dollar ($) sign.
3.3. These may be difficult to use.
3.4. No.
3.5. Yes.
3.6. Using the Get-Variable cmdlet.
3.7. Variables are not limited to specific types of objects.
3.8. By the .NET types of the values of the variables.
3.9. Enclose the variable name with curly braces {}.
3.10. Variables are only available in the scope in which they are created.
3.11. Using the PowerShell Variable provider. It behaves similarly to a file
system drive and contains the PowerShell variables in your session and
their values.
3.12. With command: Get-Command -Noun Variable
4.1. Using a pipeline operator.
4.2. Left to right.
4.3. As a single operation.
4.4. By-Value or By-PropertyName.
4.5. Parameters must be able to accept input from a pipeline, must accept
the type of object sent (or a type that can be converted to the expected
type), and must not already be used in the command.
4.6. One at a time.
5.1. As if typed on the command prompt.
5.2. Yes, named, positional, switch and dynamic.
5.3. Command line or from the pipeline.
5.4. A Function keyword, a scope (optional), a name that you select, any
number of named parameters (optional), and one or more PowerShell
commands enclosed in braces {}
5.5. Any.
5.6. A function only exists in the scope in which it was created.
5.7. Automatically stores all functions and filters in PowerShell.
6.1. Modules allow administrators who write commands to organize them
and share them with others.
6.2. All cmdlets and providers in a PowerShell session are added by a
module or a snap-in.
6.3. Get-Module -ListAvailable
6.4. Get-Module
6.5. By adding the Import-Module command into your PowerShell profile.
6.6. $Env:PSModulePath
6.7. A name conflict is when more than one command has the same name.
Name conflicts are caused by importing modules when the imported
module has commands that have the same name as other commands or
items in the session.
6.8. They can result in important commands being hidden or replaced.
6.9. Alias, Certificate, Environment, FileSystem, Function, Registry,
Variable and WSMan.
6.10. Get-PSProvider
6.11. The same way you would data on a file system drive.
6.12. Get-Help <provider-name>
7.1. .ps1
7.2. Commands in a script can be run simply by typing the script path and
file name.
7.3. The default PowerShell execution policy is "Restricted", which prevents
all scripts from running, including those written on the local computer.
7.4. Set-ExecutionPolicy AllSigned, Set-ExecutionPolicy RemoteSigned
7.5. get-help <path to script>\<ScriptName>.ps1
7.6. Using a text editor (such as Notepad) or a script editor (such as VS
Code).
7.7. Using the Param keyword.
7.8. Using dot sourcing.
7.9. A script module is a module that consists entirely or primarily of a
script and supporting resources. .psm1
7.10. A set of statements or expressions that can be used as a single unit.
8.1. Enter-PSSession cmdlet
8.2. 16
8.3. Using the ComputerName and ScriptBlock parameters.
8.4. New-PSSession cmdlet
8.5. Using the Invoke-Command cmdlet with the Session and ScriptBlock
parameters.
8.6. Using the Invoke-Command cmdlet with the ComputerName and
FilePath parameters.
8.7. PowerShellGet module.
8.8. Modules, DSC resources, role capabilities, and scripts.
8.9. Install-PackageProvider -Name NuGet -Force
8.10. Install-Module -Name PowerShellGet -Force
8.11. Update-Module cmdlet
8.12. Update-Script cmdlet
8.13. Get-InstalledModule cmdlet
8.14. Get-InstalledScript cmdlet
Final Words
In this book we covered beginner PowerShell concepts in a
comprehensive manner. The tour included an introduction to the main
components of PowerShell and the main tools available for using
PowerShell to do computer systems administration. The tools considered
included cmdlets, commands, variables, functions, pipelines, modules,
providers, scripting, package management, and remoting. This introductory
book also provided a means to access and develop other PowerShell tools.
These means included an elementary introduction to the .NET Core SDK
which can be used for developing custom cmdlets, modules and providers.