blog.sorah.jp

Capistrano 3: Change SSH port from default for the first time

We usually change the SSH port to a non-default one for security. But when provisioning a server from, say, the official Ubuntu AMI, connecting to it with the same ssh_config fails unless port 22 is specified explicitly, because the fresh server still listens on 22.

The following quick-hack task adds a Port line to /etc/ssh/sshd_config and then restarts sshd. It works on Ubuntu 14.04 (trusty); change ensure_cmd for your system. Note that it adds a listening port rather than replacing the existing one. Further modification of sshd_config is left to the provisioning tool that runs after this task, so the task itself stays simple.

I recommend hooking this task to run before the deploy task; see the hook line after the task definition below.

task :ensure_ssh_port do
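  # On each app server: if the configured SSH port already answers, do nothing;
  # if only port 22 answers, append a Port line to sshd_config and restart sshd.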
  on roles(:app) do |srv|
    user = srv.ssh_options[:user]
    port = srv.ssh_options[:port] || Net::SSH::Config.for(srv.to_s)[:port]
    unless port
      puts "ensure_ssh_port(#{srv}, #{port}): skip"
      next
    end

    puts "ensure_ssh_port(#{srv}, #{port}): start"

    user_opt = user ? "#{user}@" : ""

    if system(*%W(ssh -T -p #{port} #{user_opt}#{srv} true), err: File::NULL, out: File::NULL)
      puts "ensure_ssh_port(#{srv}, #{port}): ok"
      execute "echo '#{srv} port ensured'"
      next
    end

    unless system(*%W(ssh -T -p 22 #{user_opt}#{srv} true), err: File::NULL, out: File::NULL)
      abort "Couldn't connect #{user_opt}#{srv} with both port 22 and #{port}"
    end

    puts "ensure_ssh_port(#{srv}, #{port}): port 22 ok, changing sshd"

    ensure_cmd = "ssh -T -p 22 #{user_opt}#{srv} \"sudo sh -c 'echo Port #{port} >> /etc/ssh/sshd_config && service ssh restart'\""
    puts "ensure_ssh_port(#{srv}, #{port}): $ #{ensure_cmd}"
    system(ensure_cmd) or raise 'failed to ensure'

    execute "echo '#{srv} port ensured'"
  end
end
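
To hook it in, something like this in your deploy configuration works (a minimal example; deploy:starting is the first task of Capistrano 3's default deploy flow):

before 'deploy:starting', :ensure_ssh_port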


Building AMI from scratch using packer amazon-ebs builder

HashiCorp's Packer is a useful tool for building VM images for multiple platforms. Builders like virtualbox-iso allow building images from scratch, installing a system onto an empty disk. Packer supports AWS EC2 AMIs, but it doesn't allow building them from scratch directly.

So I've come up with the following two ways to build an AMI from scratch using Packer:

Plan A: use the custom amazon-scratch builder

First I developed https://github.com/sorah/packer-builder-amazon-scratch. This builder attaches an additional disk to the source instance, then creates an AMI from that disk. It works well, but it can't be used on Atlas, because Atlas doesn't allow installing plugins.

Plan B: boot from tmpfs using user_data

The amazon-ebs builder supports user_data for the source instance, and Ubuntu images ship with cloud-init, which performs initialization using instance metadata, including user_data.

This plan uses the amazon-ebs builder with a source EC2 instance that boots from tmpfs. A cloud-config bootcmd (see the gentoo-build repository linked below) injects a bash script that runs before /sbin/init, then reboots the instance. On the next boot, the script copies the whole rootfs into a tmpfs, unmounts the root EBS volume, and finally kicks /sbin/init on the tmpfs to continue the boot process. Provisioners can then format /dev/xvda and install a system onto it.

The script also changes the sshd listening port from 22 to 122, to make sure Packer connects to the instance only after the reboot. You have to set ssh_port to 122 in your packer configuration accordingly.
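
For reference, a minimal amazon-ebs builder fragment for this setup could look like the following. This is a sketch, not the actual template from gentoo-build: region, source AMI, instance type, and the user_data file name are placeholders, and ssh_port assumes sshd ends up on 122 after the reboot as described above.

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "ap-northeast-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "c3.large",
    "ssh_username": "ubuntu",
    "ssh_port": 122,
    "user_data_file": "user_data.yml",
    "ami_name": "scratch-build-{{timestamp}}"
  }]
}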

I'm using this trick in https://github.com/sorah/gentoo-build -- it works well with Packer, out of the box.


Monitoring fluentd with zabbix

Fluentd has a monitor_agent plugin that exposes plugin status (buffer, queue, etc.) via an HTTP API: http://docs.fluentd.org/articles/monitoring

<source>
  type monitor_agent
  bind 127.0.0.1
  port 24220
</source>
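
Once fluentd is restarted with this source, you can check the endpoint locally:

$ curl -s http://127.0.0.1:24220/api/plugins.json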

By using this, you can monitor fluentd buffer information with Zabbix low-level discovery and user parameters.

#!/usr/local/bin/ruby
require 'json'
require 'open-uri'

PLUGINS_URL = "http://localhost:24220/api/plugins.json"

json = JSON.parse(open(PLUGINS_URL, 'r', &:read))
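# Emit Zabbix low-level discovery JSON from the plugin list, i.e.
# {"data": [{"{#PLUGIN_ID}": "...", "{#PLUGIN_CATEGORY}": "...", "{#PLUGIN_TYPE}": "..."}, ...]}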

puts({
  data: json['plugins'].map do |plugin|
    {
      "{#PLUGIN_ID}" => plugin['plugin_id'],
      "{#PLUGIN_CATEGORY}" => plugin['plugin_category'],
      "{#PLUGIN_TYPE}" => plugin['type'],
    }
  end
}.to_json)

Place this script wherever you like (here, /usr/bin/fluentd-zabbix-discovery), make it executable, and define the following user parameters:

UserParameter=fluentd.plugin.discovery,/usr/bin/fluentd-zabbix-discovery
UserParameter=fluentd.plugin.retry_count[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .retry_count'
UserParameter=fluentd.plugin.buffer_total_queued_size[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .buffer_total_queued_size'
UserParameter=fluentd.plugin.buffer_queue_length[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .buffer_queue_length'
UserParameter=fluentd.plugin.type[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .type'
UserParameter=fluentd.plugin.plugin_category[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .plugin_category'
UserParameter=fluentd.plugin.plugin_id[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .plugin_id'

Then you can define a template like this: https://gist.github.com/sorah/cfbb39cb750f9bdbdeb2
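
Item prototypes in such a template would take the discovered {#PLUGIN_ID} macro as the key parameter, matching the user parameters above, for example:

fluentd.plugin.retry_count[{#PLUGIN_ID}]
fluentd.plugin.buffer_queue_length[{#PLUGIN_ID}]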

Note that the discovery creates items keyed by plugin_id, so setting a proper plugin id (@id) in fluentd's configuration is highly recommended:

<source>
  @id my_favorite_input
  type something
</source>
<match **>
  @id my_awesome_output
  type something
</match>
