blog.sorah.jp

Deploying Twingate Connector in Netns

I prefer a remotely accessible Linux box for development and any work that requires a terminal, and I spend most of my time on it. I recently changed jobs, but I still have the same preference. At my previous employer, I ran a physical Linux workstation in the office and used a Mac laptop and a Windows desktop to access it over the corporate network.

At my new employer, getting a Linux box was easy, but accessing it remotely posed a challenge. First, my new office has no way to let me reach machines there from outside. Second, I picked up a Windows workstation as my desktop environment and placed it at home, so my Linux box runs in Hyper-V instead of on physical hardware. This choice keeps expenses minimal (I'm a new employee) while satisfying my needs: a Windows desktop in my primary work location (home), a Mac laptop for remote work and the office, and a Linux box for my dev work.

Fortunately, my new employer was having productivity issues with accessing internal resources over AWS Client VPN, so I recently introduced Twingate there. I love Twingate, and it worked great at my previous company for accessing internal resources, including my Linux workstation. That means I can now do the same at my new employer just by spinning up a Twingate connector on the Linux box. However, a connector can technically send packets to arbitrary destinations, which would effectively give Twingate admins free run of the network it sits in. I didn't want to hand that to my admin colleagues, even though I trust them.

I achieved a restricted Twingate connector setup by combining netns, nftables, and systemd tricks. Continue reading for deployment details.
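
As a preview of the shape of that setup, here is a minimal sketch. The interface names, the 192.0.2.0/30 addresses, the nftables rules, and the twingate-connector.service unit name are illustrative assumptions here, not the exact configuration from the full post:

# Dedicated network namespace for the connector, wired to the host with a veth pair.
ip netns add twingate
ip link add tg0 type veth peer name tg1
ip link set tg1 netns twingate
ip addr add 192.0.2.1/30 dev tg0
ip link set tg0 up
ip netns exec twingate ip addr add 192.0.2.2/30 dev tg1
ip netns exec twingate ip link set tg1 up
ip netns exec twingate ip link set lo up
ip netns exec twingate ip route add default via 192.0.2.1

# Masquerade the namespace out so the connector can reach the Twingate control plane,
# but refuse to forward anything from it to private (home network) ranges.
sysctl -w net.ipv4.ip_forward=1
nft 'add table ip twingate'
nft 'add chain ip twingate postrouting { type nat hook postrouting priority 100 ; }'
nft 'add rule ip twingate postrouting ip saddr 192.0.2.2 oifname != "tg0" masquerade'
nft 'add chain ip twingate forward { type filter hook forward priority 0 ; policy accept ; }'
nft 'add rule ip twingate forward iifname "tg0" ip daddr { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 } drop'

# Run the packaged connector inside the namespace
# (assuming it is installed as twingate-connector.service; NetworkNamespacePath= needs systemd 242+).
mkdir -p /etc/systemd/system/twingate-connector.service.d
printf '[Service]\nNetworkNamespacePath=/run/netns/twingate\n' \
  > /etc/systemd/system/twingate-connector.service.d/netns.conf
systemctl daemon-reload
systemctl restart twingate-connector

With this shape, the Linux box itself stays reachable from the namespace at 192.0.2.1, so its sshd can still be published as a Twingate resource, while nothing else on the home network can be reached. DNS inside the namespace also has to point at a resolver that is reachable from it.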


Capistrano 3: Change SSH port from default for the first time

We usually move SSH to a non-default port for a bit of extra security. But then, when provisioning a server from an official Ubuntu AMI, connecting to it with the same ssh_config fails unless port 22 is specified explicitly, because sshd on the fresh instance is still listening only on the default port.

The following quick-hack task adds a Port line to /etc/ssh/sshd_config and then restarts sshd. It works on Ubuntu 14.04 trusty; change ensure_cmd for your system. Note that it adds a listening port rather than replacing the existing one. Further changes to sshd_config are made by a provisioning tool that runs after this task, so the task itself stays deliberately simple.

I recommend making this task run before the deploy task.
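
For example, you can wire it in with Capistrano's flow hooks; deploy:starting is one reasonable hook point, adjust it to your own flow:

before 'deploy:starting', :ensure_ssh_port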

task :ensure_ssh_port do
  on roles(:app) do |srv|
    user = srv.ssh_options[:user]
    # Figure out the port sshd should listen on, from the server definition
    # or from ssh_config; nothing to do when no port is configured.
    port = srv.ssh_options[:port] || Net::SSH::Config.for(srv.to_s)[:port]
    unless port
      puts "ensure_ssh_port(#{srv}, #{port}): skip"
      next
    end

    puts "ensure_ssh_port(#{srv}, #{port}): start"

    user_opt = user ? "#{user}@" : ""

    # Already reachable on the configured port: nothing to do.
    if system(*%W(ssh -T -p #{port} #{user_opt}#{srv} true), err: File::NULL, out: File::NULL)
      puts "ensure_ssh_port(#{srv}, #{port}): ok"
      execute "echo '#{srv} port ensured'"
      next
    end

    # Otherwise it should still be reachable on the default port 22.
    unless system(*%W(ssh -T -p 22 #{user_opt}#{srv} true), err: File::NULL, out: File::NULL)
      abort "Couldn't connect #{user_opt}#{srv} with both port 22 and #{port}"
    end

    puts "ensure_ssh_port(#{srv}, #{port}): port 22 ok, changing sshd"

    # Add the extra Port line over port 22 and restart sshd (Ubuntu 14.04).
    ensure_cmd = "ssh -T -p 22 #{user_opt}#{srv} \"sudo sh -c 'echo Port #{port} >> /etc/ssh/sshd_config && service ssh restart'\""
    puts "ensure_ssh_port(#{srv}, #{port}): $ #{ensure_cmd}"
    system(ensure_cmd) or raise 'failed to ensure'

    execute "echo '#{srv} port ensured'"
  end
end


Building AMI from scratch using packer amazon-ebs builder

HashiCorp's Packer is a useful tool for building VM images for multiple platforms. Builders like virtualbox-iso can build images from scratch, installing a system onto an empty disk. Packer supports AWS EC2 AMIs as well, but it doesn't offer a straightforward way to build them from scratch.

So I've come up with the following two ways to build an AMI from scratch using Packer:

Plan A: use customized builder amazon-scratch

First I developed https://github.com/sorah/packer-builder-amazon-scratch . It attaches an additional disk to the source instance and creates the AMI from that disk. This works well, but it can't be used on Atlas, because Atlas doesn't allow installing plugins.

Plan B: boot from tmpfs using user_data

The amazon-ebs builder supports passing user_data to the source instance. Ubuntu images ship with cloud-init, which runs some initialization steps based on instance metadata, including user_data.

This plan uses the amazon-ebs builder with the source EC2 instance booted from tmpfs. A cloud-config bootcmd injects a bash script that runs before /sbin/init, then reboots the instance. On the next boot the script copies the entire rootfs into tmpfs, unmounts the root EBS volume, and finally execs /sbin/init from the tmpfs to continue the boot process. Provisioners can then format /dev/xvda and install a system onto it.

The script also changes the sshd listening port from 22 to 122, to make sure Packer connects to the instance only after it has rebooted into tmpfs. You have to set ssh_port to 122 in your Packer configuration to match.
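
The actual user_data isn't included in this excerpt, but a rough sketch of the idea looks like the following. The wrapper script name, the mount points, and the fstab/port handling are simplified illustrations; see gentoo-build below for the real thing:

#cloud-config
bootcmd:
  - |
    # First boot only: put a wrapper in front of /sbin/init, then reboot
    # so the wrapper runs as PID 1 on the next boot.
    [ -e /sbin/init.dist ] && exit 0
    cat > /sbin/init.tmpfs <<'EOS'
    #!/bin/sh
    # PID 1 on the second boot: copy the EBS root into tmpfs, pivot into it,
    # detach the EBS root, then hand control to the real init.
    export PATH=/sbin:/usr/sbin:/bin:/usr/bin
    mount -n -t tmpfs -o size=80% tmpfs /mnt
    cd /
    for d in *; do
      case "$d" in
        dev|proc|sys|run|mnt) mkdir -p "/mnt/$d" ;;
        *) cp -ax "/$d" /mnt/ ;;
      esac
    done
    sed -i 's/^Port 22$/Port 122/' /mnt/etc/ssh/sshd_config  # packer talks only to the tmpfs boot
    : > /mnt/etc/fstab                                       # nothing to mount on the tmpfs root
    mkdir /mnt/oldroot
    cd /mnt
    pivot_root . oldroot
    for m in dev proc sys run; do mount -n --move "oldroot/$m" "$m" || true; done
    umount -l oldroot   # free the root EBS volume so provisioners can reformat /dev/xvda
    exec chroot . /sbin/init.dist
    EOS
    chmod +x /sbin/init.tmpfs
    mv /sbin/init /sbin/init.dist
    cp /sbin/init.tmpfs /sbin/init
    reboot

The instance needs enough memory to hold the whole root filesystem in tmpfs, and since bootcmd runs on every boot, the guard at the top keeps the tmpfs boot from rebooting again.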

I'm using this trick in https://github.com/sorah/gentoo-build and it works well with Packer, out of the box.
