Search for previous posts

Blog History


The best Task+Note system

A little over a year ago I was in great need of a Task+Note system to organize my brain. I was hypnotized, as many are, into thinking Notion would solve all my productivity problems, but found myself getting really frustrated every time I used it. Whether it was a quick todo or the seed of an idea that needed to grow into a full doc, I realized I needed a system that let my brain empty itself quickly, then transform that brain dump rapidly and seamlessly into a useful task or note without friction.

To this point, "productivity experts" claim that Notion shouldn't be used to do all this and that one should instead use multiple apps strung together with tools like Zapier. I say that is a bad conclusion/strategy: too much gets lost in the shuffle of multiple sources of information. Furthermore, I would argue this is even more relevant with how we use mobile devices: we need something agile to capture and refine information without constant app switching. So, I had absolutely zero interest in multiple apps, and I refused to believe that no one in the productivity industry had figured out how to solve this problem.

As I started researching, I also grew annoyed with these "productivity experts" spending 5 minutes in their videos clicking around inside some note app and concluding with "this app is amazing but you need to figure out what works best for you". Or they had a massive video playlist informing you that you "just need to download a thousand plugins/templates, spend 6 months fine tuning and then it will be good".

This doc/list is the result of many months and late nights of me endlessly testing note systems. I hope it helps someone besides myself. A particular distinction I would like to reinforce: what I was searching for and needed was a great task management system that was also great at knowledge management. 15 systems (review notes below) failed to fulfill this requirement; the one that did not fail was Amplenote.

Amplenote - The winner

Pros:
  • Their Idea Execution Funnel enables seamless transition of daily jots/quick capture/brain dump into tasks or full notes. Absolutely the best part of this system, and the main reason I chose it as my productivity tool. As stated above, I have zero interest in multiple apps; I need something that transforms brain information rapidly and seamlessly into a digital task or note, and no other note system on the market does it like this
  • Android quick capture enables creating a task or note immediately with zero friction
  • Truly unified task view, another Amplenote feature that no other note system does
    • Industry unique Task Scoring system, enabling dynamic task ranking based on urgency, priority and due date
    • Task view shows only the task boxes from notes, so you don't have to hunt through notes to find actionable items
    • List of presented tasks can be altered via selecting tags, enabling dynamic prioritizing
  • Designed to work offline
  • Well executed real-time collaboration
  • Being a potential calendar replacement is a huge bonus, especially because it's not just a calendar replacement: it pulls tasks and calendars together into one platform. Extremely powerful. (See Con #1 below, though.)
  • MFA/2FA/TOTP - a mandatory security feature IMO
  • Fast, clean design, albeit not as flashy as some would prefer; I personally like their stripped-down look
  • Tagging instead of restrictive folders
  • Email to note function
  • You can also forward SMS to the email address - helpful for long texts or pictures you want to forward to a note. (Unfortunately, this still shows up as a generic "from email" rich note.)
  • Publishing option ($10/mo "Pro" plan), can embed page in your own website
  • Single note "Vault" E2EE option ($10/mo "Pro" plan). Best implementation IMO as it allows you to still collaborate/share for non-sensitive notes, but also to encrypt chosen private notes
  • OCR
  • Good dev interaction on feature upvote, Reddit
  • Thorough, up-to-date help docs make it clear how Amplenote is intended to be used
  • Competitive monthly price for "Basic", no limit on notes.
  • Website features promotion/descriptions are simple and professional


Cons:
  • No calendar month view on Android (really big frustration for me).
    • Please upvote my feature suggestion. Thank you!
    • What's odd (credit to a commenter on my Reddit post) is that the mobile PWA actually does have the calendar view! It's great that I at least have that option when I only have my phone, but really annoying that it's not in the native Android app. Please fix this if you're listening, Amplenote team!
  • No trash function
    • Please upvote this feature suggestion. Thank you!
    • Per Lucian: "Trash functionality also coming soon!"
  • No shared tags (important for collaboration since there are no project folders to tie notes together)
  • No task hierarchy in Task view, only flat list (bad for projects, as you need to go to the project note for hierarchy)
    • On roadmap per Lucian: "Please upvote the existing suggestion here; we are aware of this one and it's on our to-do list to think of the best ways to implement this."
  • Only a single teal color scheme (I dislike greenish colors)
    • On roadmap per Lucian: "Custom themes are on the roadmap!"
  • No Exchange/WebDAV/IMAP calendar support if you want your external calendar to display your scheduled tasks - only supports Google or Microsoft
    • Please upvote both of these feature suggestions: Exchange and calDAV
  • No backup MFA codes
  • No 24 hour clock
    • Please upvote this feature suggestion. Thank you!
  • No native desktop apps, only PWA
  • No multi/bulk select of notes
  • Cannot @ mention a collaborator. Very important part of projects and tasks
    • Please upvote this feature suggestion. Thank you!
  • No automatic scheduled backup to email
  • Cannot move tasks up/down on Android
    • You can move tasks, but only by pressing the "up" button above the keyboard. I wish it were more intuitive: just hold a task and drag it with your thumb either up or down.
  • No resizable pictures - correction: I had missed the tiny resize button at the bottom right of the picture
  • Cannot pin notes - Implemented October 2021!
  • No default tag color
  • Unable to edit rich text in the note preview; you're forced to go into the note. Especially pertinent given the aforementioned lack of task hierarchy in Task view - I need a way to quickly find a parent task and clear a child task without having to go inside the note
  • No enterprise/organization functionality e.g. branding, group perms, domain
  • Above average monthly price for "Pro", no lifetime subscription

Notesnook - Appealing, but falls short

Pros:
  • Works offline
  • Clean design
  • Automatic backup to email
  • Great dev visibility and interaction via Discord
  • Competitive monthly pricing
  • E2EE


Cons:
  • E2EE means no collaboration
  • No unified task view - more of a notes app than a task management app
  • Notebook/folder based structure instead of tags
  • Instability of product, feels beta, customer data durability in question
  • Sluggish interface
  • Slow sync

Upnote - Potential, but too many cons

Pros:
  • Fast
  • Clean design
  • Competitive monthly pricing, lifetime subscription


Cons:
  • No collaboration
  • "Task view" is not very useful - just a static list of notes containing tasks; you have to go into each note to view the tasks
  • Does not seem to be designed to work offline; the company's sparse FAQs don't say
  • Notebook/folder based structure instead of tags silos notes and restricts their flexibility
  • Runs on Google's Firebase servers (migrating my data away from the big companies)
  • No dev visibility, no company story, I don't even know who the team is behind the application
  • No E2EE option

Supernotes - Potential, but too many cons

Pros:
  • Unique design/function of linked cards instead of notes inside folders (Similar to Walling - notes below)
  • Collaboration supported
  • Great dev visibility and interaction on forums, feature visibility


Cons:
  • No mobile apps, does not work offline
  • No unified task view - more of a notes app than a task management app
  • Personal preference: the card hierarchy does not feel intuitive to me. There is friction/confusion just to find parent cards, and the restriction that cards cannot stay in the sidebar without being opened feels constricting. In fact, Tobias even says "this is annoying" in a YouTube walkthrough video, but justifies it by saying it keeps the sidebar from being cluttered. I completely disagree and dislike function restrictions like this: if I want all of my parent cards "cluttering" my sidebar (IMO they wouldn't be), I should be able to do that - especially considering the price they are asking
  • No E2EE
  • Above average monthly price for "Unlimited", "Starter" only allows a measly 40 cards.


The rest - review notes

  • Notion - Online only. Bloated and slow. Zero task management without extensive, janky, manually built, sluggish database templates. Database exports not usable outside Notion. No E2EE option. Android data manipulation/editing severely limited in functionality. No MFA/2FA/TOTP
  • Obsidian - Powerful for knowledge management, but needs a lot of plugins to be useful beyond that. Not designed for really clean task management. No collaboration in Sync (unless you count sharing vaults with someone and encrypting each folder - not ideal)
  • Walling - No unified task view. Unique design/function of walls>bricks>sections but feels beta IMO, I am personally not a fan of the design (too busy), different size and color bricks with image previews distract my brain from actual data. Cannot drag and drop bricks to new walls (have to use menu). No backlinking or tagging to tie different bricks/walls together. Does not seem to work offline very well. Slow Android app. No indication of E2EE
  • Workflowy - Nested bullet based design felt cluttered/messy to me (especially annoying when a longish note/doc gets broken up by endless bullets). Essentially nonexistent task management (no unified task view, no auto clearing of tasks after checking)
  • Organizedly - Looks very promising, but online only with no apps keeps me from even trying/considering
  • Evernote - I don't trust the company due to past decisions, online only unless paying for the product, too expensive for "Personal" plan, cluttered/sluggish design IMO, duplicate "tasks" and "checklists" are confusing
  • Clickup - Too many animations and colors; cluttered, busy design that did not feel intuitive during the demo. Relies too heavily on outside tools to be a complete solution. Too much social emphasis (I just want a task+notes app, not a social network)
  • Taskade - Too many animations and colors; cluttered, busy, slow design that did not feel intuitive during the demo. The stupid annoying "Social" sidebar wouldn't stay minimized (I just want a task+notes app, not a social network)
  • NimbusNotes - Data captivity, instant nope for me (exports notes only to PDF and HTML, imports only from Evernote). No unified task view (only a task list per note). Didn't waste any more time reviewing
  • Standard Notes - Less powerful knockoff of Obsidian
  • Joplin - Ugly beta knockoff of Obsidian
  • Simplenotes - Way too simple
  • Anything stupidly restricted to Apple, e.g. Drafts, Craft, Roam, Bear, etc., I automatically ignore

WAF to ALB to private web server

How to create a protected external website environment.

  1. Create the certificate in ACM that will be used to enable HTTPS on the ALB
  2. Add the verification record for the above cert to make it active and usable.
  3. Create ALB target group and register the web instance. Later you might have to adjust the "path" and the "success codes" depending on the backend web configuration.
  4. IMPORTANT: create the ALB in a public subnet that is in the same AZ as the private web instance. The LB will not function if you miss this. The second subnet can be any public one, unless there are two web instances, in which case you need to adjust the load balancing rules and options appropriately.
  5. Create HTTP listener that forwards to HTTPS
  6. Create HTTPS listener that forwards to the target group
  7. Create SG "ALB-external" for ALB allowing appropriate public IPs
  8. Create SG "ALB-internal" for the web instance, referencing the above ALB SG to allow the LB to run health checks & route traffic. This might be port 80 or 443, depending on whether private VPC traffic from the LB to the web instance needs to be encrypted with a self-signed IIS cert. IMPORTANT: if the private traffic needs to be encrypted, adjust the ALB target group to point to port 443 instead of port 80.
  9. Create the WAF and associate the ALB.
  10. Add appropriate AWS Managed WAF rules, such as "Amazon IP reputation list" and "Known bad inputs". These rules are free, unlike the one created in step 11.
  11. Usually a "US Only" rule should be created.
  12. Add the desired CNAME for the FQDN referenced in the above cert record to point to the ALB A record.
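As a sketch of steps 5 and 6, here is roughly what the two listener rules look like when expressed as parameters for boto3's elbv2 create_listener API. The ARNs below are placeholders for illustration, not values from this walkthrough.

```python
# Listener definitions for steps 5 and 6, as you would pass them to
# boto3's elbv2 create_listener call. All ARNs below are placeholders.
TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
CERT_ARN = "arn:aws:acm:us-east-1:123456789012:certificate/placeholder"

# Step 5: HTTP listener that redirects everything to HTTPS
http_listener = {
    "Protocol": "HTTP",
    "Port": 80,
    "DefaultActions": [{
        "Type": "redirect",
        "RedirectConfig": {"Protocol": "HTTPS", "Port": "443",
                           "StatusCode": "HTTP_301"},
    }],
}

# Step 6: HTTPS listener that terminates TLS with the ACM cert from step 1
# and forwards to the target group from step 3
https_listener = {
    "Protocol": "HTTPS",
    "Port": 443,
    "Certificates": [{"CertificateArn": CERT_ARN}],
    "DefaultActions": [{"Type": "forward", "TargetGroupArn": TG_ARN}],
}
```

Each dict would be passed, along with the ALB's LoadBalancerArn, to the elbv2 client's create_listener call (or expressed equivalently in the console).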

Python SMB connection

Simple script to make an SMB connection with Python, using the pysmb library (pip install pysmb).

from smb.SMBConnection import SMBConnection

userID = 'username'
password = 'password'
client_machine_name = 'laptop'

server_name = 'fileserver01'
server_ip = 'x.x.x.x'

domain_name = 'domain.local'

conn = SMBConnection(userID, password, client_machine_name, server_name,
                     domain=domain_name, use_ntlm_v2=True, is_direct_tcp=True)

conn.connect(server_ip, 445)

shares = conn.listShares()

# Skip hidden/admin shares and the domain system shares, then list the
# files at the root of each remaining share
for share in shares:
    if not share.isSpecial and share.name not in ['NETLOGON', 'SYSVOL']:
        sharedfiles = conn.listPath(share.name, '/')
        for sharedfile in sharedfiles:
            print(sharedfile.filename)



AWS EBS performance

I have been meaning to document a couple key items to consider when looking at EBS volume performance. Here is a brief example from 4 volumes attached to an EC2 instance, in which the application in the OS was equally distributing the data across all four volumes. 

Each volume is 40 GB. To determine the IOPs for gp2, you simply multiply the volume size by 3 (3 IOPS per GB), giving each volume 120 IOPs.

To calculate bandwidth, it is IOPs multiplied by I/O size. In the metrics you can see it's averaging about 100 KB for each write to the volume. So 120 IOPs X 100 KB = 12,000 KB/s = 12 MB/s of bandwidth available for each volume. From the graph, we can see the average is 2,500 KB/s, which equals 2.5 MB/s.

Also of note, latency and queue are fine. Queue length is basically how much work is waiting to be done; you actually don't want it to be zero, as that would mean the volumes are standing around with nothing to do. Too much queued work, though, and the latency goes up.

So, the per volume bandwidth average of 2.5 MB/s is not maxing out the available 12 MB/s. However, the IOPs are being maxed out, as you can see from the graph, where usage keeps bursting above 120 and dropping back down.
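As a sanity check, the per-volume math works out like this (a quick sketch, assuming gp2's 3 IOPS per GB and the ~100 KB average write size seen in the metrics):

```python
volume_gb = 40
iops = volume_gb * 3                   # gp2 baseline: 3 IOPS per GB -> 120
avg_write_kb = 100                     # average write size from the metrics

available_kb_s = iops * avg_write_kb   # 12,000 KB/s
available_mb_s = available_kb_s / 1000 # 12 MB/s available per volume

observed_kb_s = 2500                   # average from the graph
observed_mb_s = observed_kb_s / 1000   # 2.5 MB/s actually being written

print(iops, available_mb_s, observed_mb_s)
```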

Just some brief notes on AWS data flow to EBS volumes, and about optimizing AWS PIOPs (provisioned IOPs) EBS volumes.

Let me attempt to use the freeway analogy to define the terms. 
  1. Application data = amount of traffic leaving their house driveway
  2. Data block size = the width of each vehicle
  3. Compute speed = how fast traffic gets from driveway to on ramp
  4. Queue depth = how many vehicles are currently lined up waiting on the ramp to the freeway
  5. Volume IOPs = how many vehicles can be on the freeway at once
  6. I/O size = the width of each lane
  7. Volume bandwidth in MiB/s = total width of freeway
  8. Latency = time for vehicles to reach the destination after entering the freeway (a combination of the length of the trip and the quantity of freeway lanes)
To calculate max IOPs per volume, divide the volume throughput by the I/O size. For example [1], a 16 KiB I/O is 0.015625 MiB; a volume throughput of 1,000 MiB/s divided by 0.015625 MiB gives 64,000 max IOPS for that volume.

Also [2], "to determine the optimal queue length for your workload on SSD-backed volumes, we recommend that you target a queue length of 1 for every 1000 IOPS available... for example, a volume with 3,000 provisioned IOPS should target a queue length of 3... Increasing the queue length is beneficial until you achieve the provisioned IOPS."
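Both rules of thumb are easy to sanity-check in a couple of lines, using the example numbers above:

```python
# Max IOPS: volume throughput divided by I/O size
io_size_mib = 16 / 1024           # a 16 KiB I/O expressed in MiB (0.015625)
throughput_mib_s = 1000
max_iops = throughput_mib_s / io_size_mib
print(max_iops)                   # 64000.0

# Queue length target: 1 for every 1,000 IOPS available
provisioned_iops = 3000
target_queue_length = provisioned_iops / 1000
print(target_queue_length)        # 3.0
```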

Average IO size calculation [2]
((Sum of VolumeWriteBytes over 5 minute period) / (Sum of VolumeWriteOps over 5 minute period)) / 1024

25,000,000,000 / 100,000 = 250,000 bytes per op; 250,000 / 1024 = 244 KiB average write size per op

IOPs calculation [2]
(Sum of VolumeConsumedReadWriteOps over 5 minute period) / (period in seconds)

1,500,000 / (60*5) = 5,000 IOPs

IOPs * IO size = throughput
5,000 * 249,856 bytes (244 KiB) = 1,249,280,000 bytes/s = 1,191 MiB/s
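Putting the three calculations together in code, with the example CloudWatch sums from above:

```python
period_seconds = 60 * 5            # 5 minute CloudWatch period

# Average I/O size: VolumeWriteBytes / VolumeWriteOps, converted to KiB
volume_write_bytes = 25_000_000_000
volume_write_ops = 100_000
avg_write_kib = (volume_write_bytes / volume_write_ops) / 1024
print(round(avg_write_kib))        # 244

# IOPS: VolumeConsumedReadWriteOps / period
consumed_ops = 1_500_000
iops = consumed_ops / period_seconds
print(iops)                        # 5000.0

# Throughput: IOPS x I/O size, converted from KiB/s to MiB/s
throughput_mib_s = iops * round(avg_write_kib) / 1024
print(round(throughput_mib_s))     # 1191
```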



Join Ubuntu to AD, give AD group sudo access

Install the following:
sudo apt install realmd libnss-sss libpam-sss sssd sssd-tools adcli samba-common-bin oddjob oddjob-mkhomedir packagekit -y

Make sure DHCP has given the DC IP for DNS. If not, you can set it manually here --> /etc/resolv.conf.

Update the hostname entry in /etc/hosts to the following: ubuntu1.domain.local ubuntu1

Run this command:
sudo hostnamectl set-hostname ubuntu1

Join the domain and correct OU. Make sure the domain portion for the user string is capitalized otherwise it will fail getting a Kerberos ticket.
sudo realm join -v -U 'administrator@DOMAIN.LOCAL' domain.local --computer-ou='OU=Servers,OU=CORP,DC=Domain,DC=local'

The /etc/sssd/sssd.conf and /etc/krb5.conf files will be automatically configured

Switch PasswordAuthentication from "no" to "yes" in /etc/ssh/sshd_config

Modify /etc/sssd/sssd.conf with the following (the first line drops the domain suffix from usernames; the second removes the FQDN from the user's home directory path to keep it shorter):
use_fully_qualified_names = False
fallback_homedir = /home/%u

Allow an AD group to SSH into the server (this modifies the /etc/sssd/sssd.conf file):
sudo realm permit -g <AD group name>

Add the following to /etc/sudoers to allow sudo access to the group or user:
username@domain.local ALL=(ALL) NOPASSWD: ALL

Allow "mkhomedir" to do its job when an AD user logs in for the first time. Make sure to insert the line immediately following the 1st "session" entry:
sudo nano /etc/pam.d/sshd
session required pam_mkhomedir.so skel=/etc/skel/ umask=0022

Restart both sssd and sshd:
sudo systemctl restart sssd
sudo systemctl restart sshd

Try logging in with a user from the AD group specified above. If you attempt a login with a user outside the group, the server should immediately close the connection.

ssh domain\username@ubuntu1.domain.local

If you need to automount a DFS share, follow my other blog post -->


Permissions needed for SQL logins with nested groups

 For #2, you can use this GPO -->


Remote Desktop Gateway setup

This is a quick and easy guide to setting up Remote Desktop Gateway in the context of a private LAN environment, so we are not concerned with a publicly signed cert, as the traffic will not be exposed to the internet. In this scenario, we make the DC the target instance and use a self-signed cert.

Add the Remote Desktop Gateway service

Create a self-signed cert, then import that certificate into the Trusted Root store of whatever source computers you will be connecting from.

Include the gateway itself as one of the servers 

Make sure to start the TSGateway service

Modify the RDP settings to use the gateway. Make sure to uncheck "Bypass RD Gateway server for local addresses". This is key as all traffic will be over RFC 1918 CIDRs.

Optional: create shortcut on desktop to use the above settings

Verify you can see the connection is showing up on the gateway


Automatic, silent S3 bucket mount in Windows

A free and open source setup using Rclone [1] to mount an S3 bucket to Windows.

Install Rclone and its dependency WinFsp:

choco install rclone

choco install winfsp

Configure Rclone and select appropriate options in the wizard (instructions are very simple and should not require me to explain them):

rclone config

Note, you will need an IAM user or EC2 instance profile with the correct S3 bucket permissions. Test your configuration with one of the below commands. You can mount to a folder or as a network drive:

rclone mount <rclone config name from wizard>:<s3 bucket name/folder> %userprofile%\<desired location or name of folder>


rclone mount <rclone config name from wizard>:<s3 bucket name/folder> E: --network-mode

If the config is correct, you will see the bucket mounted correctly and the Rclone service running. 

To make this automatic and silent, put the above command into a .cmd file, download the "Quiet" [2] .exe, and place both of them in the same folder. Create a scheduled task to run at login. Test that the task runs correctly. You can kill it by ending the Rclone service in the task manager. 

As a backup for the end user if the bucket becomes unmounted, create a shortcut that runs the scheduled task you created and give it an appropriate icon.

This is the path for the shortcut --> C:\Windows\System32\schtasks.exe /run /tn "S3 Mount"





AWS Terraform automation

This is a starter Terraform tutorial broken up into three separate sections, each referencing the outputs from the other files, enabling you to selectively apply sections: the VPC portion, the logging portion, and finally launching the EC2 instances. I will post some more advanced Terraform tutorials at some point.

NOTE: When using this to continually manage resources, it's very important to keep state files safe and in a central place for team access. They can be uploaded to remote cloud destinations, such as Terraform Cloud or an S3 bucket. In my demo example, everything is just kept local.

I also included a picture below of my VSCode setup, as it might give you a helpful visual of how to efficiently work with this tool, as well as understand the moving pieces a little better.

I use Windows, so Chocolatey is utilized to install and update Terraform. 

You can find the files here at my GitHub:

For each region, you will need to copy the whole file structure into a separate folder. It would look like this:

  • Terraform
    • East
      • 1VPC
      • 2Logging
      • 3EC2

As mentioned, each one of these can be applied separately. The steps are initialize, plan, and apply. You need to do this in each directory to apply the code:

  • terraform init
    • This initializes the directory that you are in, downloading modules and dependencies for that specific plan.
  • terraform plan -var-file="cidr_region.tfvars" -out test.tfplan
    • This will validate the plan to ensure syntax and variables are correct. It uses .tfvars files to input the unique variables for the CIDR and region, then spits out a ready-to-go plan.
    • When in directories != 1VPC, you will need to specify the path to this tfvars file like this:
      • "C:\Users\%userprofile%\OneDrive\VScode\Terraform\Terraform_demo\1VPC\cidr_region.tfvars"
  • terraform apply test.tfplan
    • Builds your plan
  • terraform destroy -var-file="cidr_region.tfvars"
    • Destroys all of the resources you just built
    • Alternatively, you can comment out items in your config and Terraform will see them as "removed". The next time you run "apply" it will ask if you want to destroy those commented-out resources.
Pay attention to the red boxes in the image below.
  • You will need a data file in each directory that is not 1VPC. This data file points the directory back to the information in the terraform.tfstate file.
  • The terraform.tfstate file contains all of the actual resource ID's. This state file also has hooks that enable you to define any information you want extracted and made usable by other config files.


AWS S3 limited bucket policy

    "Statement": [
            "Action": [


Ubuntu automount DFS/SMB share

Install necessary utility:

sudo apt install cifs-utils

Create a credentials file called /.smbcredentials for mounting permissions. Use this instead of putting the credentials in the connection string, as later we will be using an automount file and we don't want others to read the credentials. The file contents look like this (swap in real values):

username=username
password=password
domain=domain.local

Restrict the file to root:

sudo chmod 0600 /.smbcredentials

Create the mountpoint --> /mnt/DFSshare

Insert the following string into the file /etc/fstab:

//domain.local/shared /mnt/DFSshare cifs credentials=/.smbcredentials,iocharset=utf8,vers=2.0,gid=1000,uid=1000,file_mode=0777,dir_mode=0777 0 0

Test if the automount works by running this command:

sudo mount -a

If it fails, try manually mounting it in the terminal with this connection string:

sudo mount -t cifs //domain.local/shared /mnt/DFSshare -o credentials=/.smbcredentials

If this works, rerun the mount -a command and it should succeed. Reboot and verify it auto mounted.