Blog History

April 26, 2021

Remote Desktop Gateway setup

This is a quick and easy guide to setting up Remote Desktop Gateway in the context of a private LAN environment, so we are not concerned with a publicly signed cert, as the traffic will not be exposed to the internet. In this scenario, we are making the DC the target instance and using a self-signed cert.

Add the Remote Desktop Gateway service

Create a self-signed cert, then import that certificate into the Trusted Root store of whatever source computers you will be connecting from.
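
A minimal PowerShell sketch of this step, assuming the gateway's FQDN is gw.domain.local and an export path of C:\temp (both hypothetical, adjust to your environment):

# On the gateway: create the self-signed cert and export its public portion
$cert = New-SelfSignedCertificate -DnsName "gw.domain.local" -CertStoreLocation "Cert:\LocalMachine\My"
Export-Certificate -Cert $cert -FilePath "C:\temp\rdgw.cer"

# On each source computer: import it into the Trusted Root store
Import-Certificate -FilePath "C:\temp\rdgw.cer" -CertStoreLocation "Cert:\LocalMachine\Root"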

Include the gateway itself as one of the servers 

Make sure to start the TSGateway service
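
For example, from an elevated PowerShell prompt (TSGateway should be the service's short name):

Start-Service TSGateway
Set-Service TSGateway -StartupType Automatic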

Modify the RDP settings to use the gateway. Make sure to uncheck "Bypass RD Gateway server for local addresses". This is key, as all traffic will be over RFC 1918 CIDRs.

Optional: create a shortcut on the desktop that uses the above settings
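
If you save those settings to a .rdp file for the shortcut, the gateway-related lines look roughly like this (hypothetical FQDN; gatewayusagemethod:i:1 tells the client to always use the gateway rather than bypassing it for local addresses):

gatewayhostname:s:gw.domain.local
gatewayusagemethod:i:1
gatewayprofileusagemethod:i:1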

Verify the connection is showing up on the gateway

April 23, 2021

Automatic, silent S3 bucket mount in Windows

A free and open-source setup using Rclone [1] to mount an S3 bucket in Windows.

Install Rclone and its dependency WinFsp:

choco install rclone

choco install winfsp

Configure Rclone and select the appropriate options in the wizard (the prompts are straightforward and should not require further explanation):

rclone config

Note: you will need an IAM user or EC2 instance profile with the correct S3 bucket permissions. Test your configuration with one of the commands below. You can mount to a folder or as a network drive:

rclone mount <rclone config name from wizard>:<s3 bucket name/folder> %userprofile%\<desired location or name of folder>

or

rclone mount <rclone config name from wizard>:<s3 bucket name/folder> E: --network-mode
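
A filled-in sketch, assuming the wizard produced a remote named s3remote and a bucket named my-bucket (both hypothetical):

rclone mount s3remote:my-bucket %userprofile%\S3Mount

or

rclone mount s3remote:my-bucket E: --network-mode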

If the config is correct, you will see the bucket mounted and the Rclone process running.

To make this automatic and silent, put the above command into a .cmd file, download the "Quiet" [2] .exe, and place both of them in the same folder. Create a scheduled task to run at login. Test that the task runs correctly. You can kill it by ending the Rclone service in the task manager. 
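
A hedged sketch of the pieces, assuming everything lives in C:\Tools and the task is named "S3 Mount" (all paths and names hypothetical):

:: mount.cmd
rclone mount s3remote:my-bucket E: --network-mode

Then create the task from an elevated prompt:

schtasks /create /tn "S3 Mount" /tr "C:\Tools\quiet.exe C:\Tools\mount.cmd" /sc onlogon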

As a backup for the end user if the bucket becomes unmounted, create a shortcut to the scheduled task you created and give it an appropriate icon.

This is the path for the shortcut --> C:\Windows\System32\schtasks.exe /run /tn "S3 Mount"

References:

[1] https://rclone.org/

[2] http://www.joeware.net/freetools/tools/quiet/

February 27, 2021

AWS Terraform automation: Round 2

This is a round 2 update on my Terraform playbook (my initial playbook was just one giant file). I decided to learn how to break everything up into three separate sections while referencing the outputs from other files, enabling me to selectively apply sections. There is the VPC portion, the logging portion, and finally launching the EC2 instances. You can run only one or all of them depending on your needs. I have thoroughly commented everything so it should be readily understandable.

NOTE: When using this to continually manage resources, it's very important to keep state files safe and in a central place for team access. They can be stored in remote cloud destinations, such as an S3 bucket, through a Terraform backend. In my example, everything is just kept local.
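
A minimal sketch of such a backend, assuming a pre-existing bucket named my-tf-state (hypothetical); this goes in a terraform {} block of the configuration:

terraform {
  backend "s3" {
    bucket = "my-tf-state"
    key    = "east/1vpc/terraform.tfstate"
    region = "us-east-1"
  }
}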

I also included a picture below of my VSCode setup, as it might give you a helpful visual of how to efficiently work with this powerful tool, as well as help you understand the moving pieces a little better. I would advise using VSCode, as there are great extensions for many different languages and, more importantly, the folder sidebar enables you to pop in and out of directories/files, or view them split screen. It also has a built-in shell to run the code, which makes it easy to work with the code on a single monitor. Another feature is the shortcut "Ctrl + /", which comments out a whole block of highlighted code. VERY helpful when trying to build/test code, or when you want to remove a whole resource without deleting your code.

Use Chocolatey to install and update Terraform. IMPORTANT: I have updated the code to utilize some newer features that enable segmentation. You must have version 0.14 or later for this code to work.

You can find the files here at my GitHub:

https://github.com/centifanto/Terraform

For each region, you will need to copy the whole file structure into a separate folder. It would look like this:

  • Terraform
    • East
      • 1VPC
      • 2Logging
      • 3EC2

As mentioned, each one of these can be applied separately. The steps are initialize, plan, and apply. You need to do this in each directory to apply the code:

  • terraform init
    • This initializes the directory that you are in, downloading modules and dependencies for that specific plan.
  • terraform plan -var-file="cidr_region.tfvars" -out test.tfplan
    • This will validate the plan to ensure syntax and variables are correct. It uses .tfvars files to input the unique variables for the CIDR and region, then spits this out as a ready-to-go plan.
    • When in directories other than 1VPC, you will need to specify the path to this tfvars file, like this:
      • "C:\Users\%userprofile%\OneDrive\VScode\Terraform\Terraform_demo\1VPC\cidr_region.tfvars"
  • terraform apply test.tfplan
    • Builds your plan
  • terraform destroy -var-file="cidr_region.tfvars"
    • Destroys all of the resources you just built
    • Alternatively, you can comment out items in your config and Terraform will see them as "removed". The next time you run "apply", it will ask if you want to destroy those commented-out resources.
Pay attention to the red boxes in the image below.
  • You will need a data.tf file in each directory that is not 1VPC. This data file points the directory back to the information in the 1VPC terraform.tfstate file (a sketch follows this list).
  • The terraform.tfstate file contains all of the actual resource IDs. This state file also has hooks into the outputs.tf file that let you define any information you want extracted and made usable by other config files.
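
A minimal sketch of that data.tf wiring, assuming the 1VPC state sits one directory up and its outputs.tf exposes an output named vpc_id (paths and names hypothetical):

data "terraform_remote_state" "vpc" {
  backend = "local"
  config = {
    path = "../1VPC/terraform.tfstate"
  }
}

# referenced elsewhere as, e.g.:
# vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id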

February 19, 2021

AWS S3 limited bucket policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::84-nexpoint-fileserver"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::84-nexpoint-fileserver/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
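
In plain terms: the first statement allows listing the bucket's contents, the second allows object reads and writes inside it, and the third lets the user enumerate bucket names (useful for console navigation). A quick smoke test from the AWS CLI, assuming the limited user's keys are configured and test.txt exists locally (hypothetical file):

aws s3 ls s3://84-nexpoint-fileserver
aws s3 cp test.txt s3://84-nexpoint-fileserver/test.txt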

February 2, 2021

Ubuntu automount DFS/SMB share

Install necessary utility:

sudo apt install cifs-utils

Create a credentials file called /.smbcredentials for the mounting permissions. Use this instead of putting the credentials in the connection string, since we will later reference it from an automount entry and we don't want others to read the credentials:

username=UbuntuMount
password=password
domain=DOMAIN

Restrict the file to root:

sudo chmod 0600 /.smbcredentials

Create the mountpoint --> /mnt/DFSshare
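
One way to create it:

sudo mkdir -p /mnt/DFSshare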

Insert the following line into /etc/fstab:

//domain.local/shared /mnt/DFSshare cifs credentials=/.smbcredentials,iocharset=utf8,vers=2.0,gid=1000,uid=1000,file_mode=0777,dir_mode=0777 0 0

Test if the automount works by running this command:

sudo mount -a

If it fails, try manually mounting it in the terminal with this connection string:

sudo mount -t cifs //domain.local/shared /mnt/DFSshare -o credentials=/.smbcredentials

If this works, try rerunning the mount -a command and it should succeed. Reboot and verify it auto-mounted.

References:

[1] https://wiki.ubuntu.com/MountWindowsSharesPermanently

January 7, 2021

Cross forest Domain Admins GPO

This process will enable the Domain Admins group from one forest to get added to the local Administrators group on servers in another forest with a one-way, external forest trust in place.

Here is a basic breakdown [1]:

  • Domain Admins is a Global Group and thus confined to its own domain, so you must nest it inside of a Domain LOCAL group in the target forest.
  • Universal groups are used to consolidate groups that span domains inside of a forest, and in my use case, my domain is intentionally in another forest as I want the domains to stay divided.
  • Global groups may contain accounts and other global groups from the SAME domain.
  • Domain local groups may contain accounts, global groups, universal groups from ANY trusted domain, as well as domain local groups from the same domain.

The order of this nesting concept is AGDLP [2]: Account > Global(domain1) > Domain Local(domain2) > Permission.

This new domain local group, in my example, is "Group-Server-Admins". Once this is done, we can create the GPO that pushes it into the local Administrators group on the target domain's servers. I have applied this at the domain root, as I want all of domain1's Domain Admins group to have local Administrator access on domain2's servers.
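
A hedged PowerShell sketch of the nesting, run from domain2 (domain names hypothetical; cross-forest member resolution can be finicky, so treat this as a starting point rather than a guaranteed recipe):

# Create the domain local group in the target forest (domain2)
New-ADGroup -Name "Group-Server-Admins" -GroupScope DomainLocal

# Nest domain1's Domain Admins (a global group) inside it, across the trust
$da = Get-ADGroup "Domain Admins" -Server domain1.local
Add-ADGroupMember -Identity "Group-Server-Admins" -Members $da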

Path as follows: Computer Configuration > Preferences > Control Panel Settings > Local Users and Groups

References:

[1] https://docs.microsoft.com/en-us/windows/security/identity-protection/access-control/active-directory-security-groups 

[2] https://en.wikipedia.org/wiki/AGDLP

January 6, 2021

Azure SQL permissions

Using the Azure console to assign IAM roles only gives users administrative permissions in the console; it does not give those users permissions inside of SQL. You must create these users and permissions manually [1,2].
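
The statements below can be run from SSMS or sqlcmd; a hedged connection example, assuming a server named myserver (hypothetical) and using -G for Azure AD authentication (add -U/-P for Azure AD password auth):

sqlcmd -S myserver.database.windows.net -d master -G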

To view the permissions given to the users:

SELECT DP1.name AS DatabaseRoleName,  
   isnull (DP2.name, 'No members') AS DatabaseUserName  
 FROM sys.database_role_members AS DRM
 RIGHT OUTER JOIN sys.database_principals AS DP1
   ON DRM.role_principal_id = DP1.principal_id
 LEFT OUTER JOIN sys.database_principals AS DP2
   ON DRM.member_principal_id = DP2.principal_id
WHERE DP1.type = 'R'
ORDER BY DP1.name;

To create SQL logins:
-- run on master
CREATE USER [user@domain.com] FROM EXTERNAL PROVIDER;
ALTER ROLE dbmanager ADD MEMBER [user@domain.com];
ALTER ROLE loginmanager ADD MEMBER [user@domain.com];

To give users database permissions:
-- run on the target database
CREATE USER [user@domain.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [user@domain.com];
ALTER ROLE db_datawriter ADD MEMBER [user@domain.com];
ALTER ROLE db_owner ADD MEMBER [user@domain.com];

The users might not need db_owner, depending on what they are trying to do to the database.

For regular SQL (non-Azure) logins, use these commands:

CREATE LOGIN readonlyuser WITH PASSWORD = 'Password!';
CREATE USER readonlyuser FOR LOGIN readonlyuser WITH DEFAULT_SCHEMA = dbo;
EXEC sp_addrolemember 'db_datareader', 'readonlyuser';

-- for a writer user
EXEC sp_addrolemember 'db_datawriter', 'standarduser';

-- for a power user who can add tables and edit table design
EXEC sp_addrolemember 'db_ddladmin', 'poweruser';

References:

[1] https://www.mssqltips.com/sqlservertip/5242/adding-users-to-azure-sql-databases/

[2] https://docs.microsoft.com/en-us/sql/relational-databases/security/authentication-access/database-level-roles?view=sql-server-ver15

December 31, 2020

AWS Secrets Manager password retrieval via BASH script

Basic test script for retrieving a password [1] from AWS Secrets Manager. One can obviously use the password variable for an actual operation instead of echoing the password.

You will need an IAM user or role with the SecretsManagerReadWrite [2] policy attached to it. Configure the AWS CLI with the user's keys.

Initially I attempted to use the native AWS CLI --query option [3], but it spit out the username as well as some punctuation that was not needed. To get around this, I used "jq" to parse the JSON results and spit out just the password.

Modified from the following resources [4-6]:

#!/bin/bash

# Pull the secret JSON, extract the SecretString, then pull out the testuser key
testuser_pw="$(aws secretsmanager get-secret-value --secret-id testsecret \
  | jq --raw-output '.SecretString' | jq -r .testuser)"

echo "$testuser_pw"

Here is a screenshot of the 3 stages of testing, with the last line finally outputting just the password as desired.
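
The three stages correspond roughly to these commands, shown here as a reconstructed sketch (raw secret JSON first, then the SecretString, then just the password):

aws secretsmanager get-secret-value --secret-id testsecret

aws secretsmanager get-secret-value --secret-id testsecret | jq --raw-output '.SecretString'

aws secretsmanager get-secret-value --secret-id testsecret | jq --raw-output '.SecretString' | jq -r .testuser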

References:

[1] https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html
[2] https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_available-policies.html
[3] https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output.html#cli-usage-output-filter
[4] https://stackoverflow.com/questions/50911540/parsing-secrets-from-aws-secrets-manager-using-aws-cli
[5] https://stackoverflow.com/questions/36452555/bash-script-to-loop-through-output-from-aws-command-line-client
[6] https://stackoverflow.com/questions/44296729/aws-cli-command-inside-bash-script-cant-locate-file


December 29, 2020

AWS EC2 Stop/Start BASH script for end users to run on Linux or Mac

You must first provision an IAM user with permissions to perform the actions. Note that the script's status check calls describe-instances, so the policy also needs ec2:DescribeInstances, which does not support resource-level restrictions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:<account number>:instance/i-1234567890"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}

Configure the AWS CLI with the above IAM user on the end users local computer.

When they run the script, they will need to specify one of three options: status, start, or stop.
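
For example, assuming the script is saved as ec2control.sh (hypothetical name):

./ec2control.sh status
./ec2control.sh start
./ec2control.sh stop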

If writing this script in Windows, and trying to test it in WSL or an actual Unix-based system, you might get this error:

$'\r': command not found

Use "dos2unix" on the script file after each edit to modify the newline characters so they are Unix-compatible [1].
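
For example, using the same hypothetical filename as above:

dos2unix ec2control.sh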

Modified from source [2]:

#!/bin/bash

INSTANCE_ID="i-1234567890"

function _log(){
    trailbefore=$2
    start=""
    # prefix " - " unless the caller passed false as the second argument
    if [ -z "$trailbefore" ] || [ "$trailbefore" = true ]
        then
        start=" - "
    fi

    printf "$start$1"
}

function run_command (){
    COMMAND=$1
    # the --query option arrives split across $2 and $3 (it contains a space),
    # so stitch it back together here
    QUERY="$2 $3"
    OUTPUT="--output text"
    local result=$(eval aws ec2 $COMMAND --instance-ids $INSTANCE_ID $QUERY $OUTPUT)
    echo "$result"
}

function getStatus(){
    CMD="describe-instances"
    EXTRA="--query \"Reservations[].Instances[].State[].Name\""
    result=$(run_command $CMD $EXTRA)
    echo "$result"
}

function _checkStatus(){
    status=$(getStatus)
    # wait out any in-flight transition before issuing a new command
    if [ "$status" = "pending" ] || [ "$status" = "stopping" ]
        then
        _log "Current status: $status"
        _log " Waiting "
        while [ "$status" = "pending" ] || [ "$status" = "stopping" ]
            do
            sleep 5
            _log "." false
            status=$(getStatus)
        done
        _log "\n" false
    fi
}

function start {
    CMD="start-instances"
    _checkStatus
    result=$(run_command $CMD)
    echo $result
}
function stop {
    CMD="stop-instances"
    _checkStatus
    result=$(run_command $CMD)
    echo $result
}

if [ -z "$1" ]
    then
    _log "\n Possible commands: status|start|stop \n\n"
else
    if [ "$1" = "start" ]
        then
        start
    elif [ "$1" = "stop" ]
        then
        stop
    elif [ "$1" = "status" ]
        then
        getStatus
    fi
fi

References:

[1] https://stackoverflow.com/questions/11616835/r-command-not-found-bashrc-bash-profile

[2] https://stackoverflow.com/questions/42641970/aws-cli-bash-script-to-manage-instances


October 14, 2020

AWS VPC Terraform automation

UPDATE: Much improved round 2 here: https://blog.centifanto.net/2021/02/aws-terraform-automation-round-2.html

Ignore this old setup :)

I have wanted to learn Terraform for a while now, and finally had the business opportunity last night/today to bury my head in the docs and learn the basics. It was an absolute blast, and now I'm hooked on the idea of automating everything. Plans are to research how to utilize Terraform, Ansible and Pulumi in a cohesive strategy. Stay tuned as I learn and post more. I still need to learn many things, such as securing secrets, importing existing infrastructure, launching different resources and working with on-premises equipment, etc. But the plan is to have pre-built templates for every new client, with minimal reworking of the code via the variables file.

Two brief observations:

  • As mentioned above, Terraform can utilize separate variable files which is fantastic. You will see multiple references to "var.<variable name>".  Those files just reside in the same folder as the config file, and then to use that variables file --> terraform apply -var-file="abc.tfvars" 
  • I really did not want to rework every CIDR statement in the code for every new client, so declaring the VPC CIDR in the variables file enables me to then use cidrsubnet to auto-segment. See the link for how it works: https://www.terraform.io/docs/configuration/functions/cidrsubnet.html (a quick worked example follows this list). I still want to make a loop for the subnets instead of declaring each one, but that'll be for a later date.
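
A quick way to see what cidrsubnet produces, assuming a hypothetical 10.0.0.0/16 VPC CIDR: with 3 new bits you get eight /19 subnets (netnums 0-7), which lines up with the eight subnet resources below.

$ terraform console
> cidrsubnet("10.0.0.0/16", 3, 0)
"10.0.0.0/19"
> cidrsubnet("10.0.0.0/16", 3, 1)
"10.0.32.0/19"
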
Below is my code (50% me, 50% forums/docs). There are many great guides/videos on setting up Terraform and how things work, but feel free to reach out to me on Twitter or LinkedIn if you have questions.  
 
provider "aws" {
# Here you can use hard coded credentials, or AWS CLI profile credentials 
  access_key = "xxxx"
  secret_key = "xxxx"
#profile = "TerraformDemo" 
  region     = var.region
}

# Create VPC
resource "aws_vpc" "Main" {
    cidr_block = var.vpc_cidr
    instance_tenancy = "default"
    enable_dns_hostnames = true
    enable_dns_support = true
    tags = {
          Name = "Main VPC"
    }
}

#Create PRIVATE Subnets
resource "aws_subnet" "PRIVATE-1" {
  vpc_id     = aws_vpc.Main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 3, 0)
  availability_zone = "us-east-1a"

  tags = {
    Name = "PRIVATE 1"
  }
}
resource "aws_subnet" "PRIVATE-2" {
  vpc_id     = aws_vpc.Main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 3, 1)
  availability_zone = "us-east-1b"

  tags = {
    Name = "PRIVATE 2"
  }
}
resource "aws_subnet" "PRIVATE-3" {
  vpc_id     = aws_vpc.Main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 3, 2)
  availability_zone = "us-east-1c"

  tags = {
    Name = "PRIVATE 3"
  }
}
resource "aws_subnet" "PRIVATE-4" {
  vpc_id     = aws_vpc.Main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 3, 3)
  availability_zone = "us-east-1d"

  tags = {
    Name = "PRIVATE 4"
  }
}
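
#NOTE (sketch): the four PRIVATE subnet blocks above could collapse into one
#counted resource; this is the loop mentioned earlier, left commented out and
#untested so it does not duplicate the explicit resources:
#resource "aws_subnet" "PRIVATE" {
#  count             = 4
#  vpc_id            = aws_vpc.Main.id
#  cidr_block        = cidrsubnet(var.vpc_cidr, 3, count.index)
#  availability_zone = element(["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"], count.index)
#
#  tags = {
#    Name = "PRIVATE ${count.index + 1}"
#  }
#}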

#Create PUBLIC Subnets
resource "aws_subnet" "PUBLIC-1" {
  vpc_id     = aws_vpc.Main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 3, 4)
  availability_zone = "us-east-1a"
  map_public_ip_on_launch = true
  
  tags = {
    Name = "PUBLIC 1"
  }
}
resource "aws_subnet" "PUBLIC-2" {
  vpc_id     = aws_vpc.Main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 3, 5)
  availability_zone = "us-east-1b"
  map_public_ip_on_launch = true
  
  tags = {
    Name = "PUBLIC 2"
  }
}
resource "aws_subnet" "PUBLIC-3" {
  vpc_id     = aws_vpc.Main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 3, 6)
  availability_zone = "us-east-1c"
  map_public_ip_on_launch = true
  
  tags = {
    Name = "PUBLIC 3"
  }
}
resource "aws_subnet" "PUBLIC-4" {
  vpc_id     = aws_vpc.Main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 3, 7)
  availability_zone = "us-east-1d"
  map_public_ip_on_launch = true
  
  tags = {
    Name = "PUBLIC 4"
  }
}

#Create IGW and attach to VPC
resource "aws_internet_gateway" "IGW" {
  vpc_id = aws_vpc.Main.id

  tags = {
    Name = "IGW"
  }
}

#Allocate EIP for NAT Gateway
resource "aws_eip" "NATGW-EIP" {
  vpc      = true
  tags = {
    Name = "NATGW-EIP"
  }
}

#Create NAT Gateway in PUBLIC 4
resource "aws_nat_gateway" "NATGW" {
  allocation_id = aws_eip.NATGW-EIP.id
  subnet_id     = aws_subnet.PUBLIC-4.id

  tags = {
    Name = "NATGW"
  }
}

#Create peering connection
resource "aws_vpc_peering_connection" "peer1" {
  peer_owner_id = "xxxx"
  peer_vpc_id = "vpc-xxxx"
  vpc_id      = aws_vpc.Main.id
  peer_region   = "us-east-1"
  tags = {
    Name = "Peer #1"
  }
}

#Create Public Route Table
resource "aws_route_table" "PUBLIC-RT" {
  vpc_id = aws_vpc.Main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.IGW.id
  }

  route {
    cidr_block = "x.x.x.x/24"
    vpc_peering_connection_id = aws_vpc_peering_connection.peer1.id
  }

    tags = {
    Name = "PUBLIC-RT"
  }
}

#Associate Public Subnets to Public Route Table
resource "aws_route_table_association" "PUBLIC-1-PUBLIC-RT" {
  subnet_id      = aws_subnet.PUBLIC-1.id
  route_table_id = aws_route_table.PUBLIC-RT.id
}
resource "aws_route_table_association" "PUBLIC-2-PUBLIC-RT" {
  subnet_id      = aws_subnet.PUBLIC-2.id
  route_table_id = aws_route_table.PUBLIC-RT.id
}
resource "aws_route_table_association" "PUBLIC-3-PUBLIC-RT" {
  subnet_id      = aws_subnet.PUBLIC-3.id
  route_table_id = aws_route_table.PUBLIC-RT.id
}
resource "aws_route_table_association" "PUBLIC-4-PUBLIC-RT" {
  subnet_id      = aws_subnet.PUBLIC-4.id
  route_table_id = aws_route_table.PUBLIC-RT.id
}

#Create Private Route Table
resource "aws_route_table" "PRIVATE-RT" {
  vpc_id = aws_vpc.Main.id

  route {
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.NATGW.id
  }

  route {
    cidr_block = "x.x.x.x/24"
    vpc_peering_connection_id = aws_vpc_peering_connection.peer1.id
  }

  tags = {
    Name = "PRIVATE-RT"
  }
}

#Associate Private Subnets to Private Route Table
resource "aws_route_table_association" "PRIVATE-1-PRIVATE-RT" {
  subnet_id      = aws_subnet.PRIVATE-1.id
  route_table_id = aws_route_table.PRIVATE-RT.id
}
resource "aws_route_table_association" "PRIVATE-2-PRIVATE-RT" {
  subnet_id      = aws_subnet.PRIVATE-2.id
  route_table_id = aws_route_table.PRIVATE-RT.id
}
resource "aws_route_table_association" "PRIVATE-3-PRIVATE-RT" {
  subnet_id      = aws_subnet.PRIVATE-3.id
  route_table_id = aws_route_table.PRIVATE-RT.id
}
resource "aws_route_table_association" "PRIVATE-4-PRIVATE-RT" {
  subnet_id      = aws_subnet.PRIVATE-4.id
  route_table_id = aws_route_table.PRIVATE-RT.id
}

#Create DHCP Options Set
resource "aws_vpc_dhcp_options" "domain" {
  domain_name          = "domain.local"
  domain_name_servers  = ["x.x.x.x", "x.x.x.x"]
  ntp_servers          = ["x.x.x.x"]
}

#Create OpenVPN SG
resource "aws_security_group" "OpenVPN" {
  vpc_id = aws_vpc.Main.id
  name = "OpenVPN Access Group"
  description = "OpenVPN Access Group"
    ingress {
    from_port = 943
    to_port = 943
    protocol = "tcp"
    cidr_blocks = ["x.x.x.x/32"]
    description = "OpenVPN admin"
  }    
  ingress {
    from_port = 1194
    to_port = 1194
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}