Capistrano 3: Changing the SSH port from the default for the first time


We often move SSH to a non-default port for security. But then, when provisioning a server from the official Ubuntu AMI, connecting with the same ssh_config fails unless port 22 is specified explicitly, because sshd is still listening on the default port.

The following quick-hack task adds a Port line to /etc/ssh/sshd_config and then restarts sshd. It works on Ubuntu 14.04 (trusty); change ensure_cmd for your system. Note that this adds a listening port rather than replacing the existing one. Further modification of sshd_config is left to the provisioning tool that runs after this task, so this task keeps things simple.

I recommend making this task run before the deploy task.

task :ensure_ssh_port do
  on roles(:app) do |srv|
    user = srv.ssh_options[:user]
    port = srv.ssh_options[:port] || Net::SSH::Config.for(srv.to_s)[:port]
    unless port
      puts "ensure_ssh_port(#{srv}, #{port}): skip"
      next
    end

    puts "ensure_ssh_port(#{srv}, #{port}): start"

    user_opt = user ? "#{user}@" : ""

    if system(*%W(ssh -T -p #{port} #{user_opt}#{srv} true), err: File::NULL, out: File::NULL)
      puts "ensure_ssh_port(#{srv}, #{port}): ok"
      execute "echo '#{srv} port ensured'"
      next
    end

    unless system(*%W(ssh -T -p 22 #{user_opt}#{srv} true), err: File::NULL, out: File::NULL)
      abort "Couldn't connect to #{user_opt}#{srv} on either port 22 or #{port}"
    end

    puts "ensure_ssh_port(#{srv}, #{port}): port 22 ok, changing sshd"

    ensure_cmd = "ssh -T -p 22 #{user_opt}#{srv} \"sudo sh -c 'echo Port #{port} >> /etc/ssh/sshd_config && service ssh restart'\""
    puts "ensure_ssh_port(#{srv}, #{port}): $ #{ensure_cmd}"
    system(ensure_cmd) or raise 'failed to ensure'

    execute "echo '#{srv} port ensured'"
  end
end
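To run it before deploys as recommended, a hook in config/deploy.rb along these lines should work (assuming the task is defined at the top level under the name above):

```ruby
# config/deploy.rb: run the port check before every deploy
before 'deploy', 'ensure_ssh_port'
```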
Published at 2015-07-23 03:23:04 +0000 | Permalink

Building AMI from scratch using packer amazon-ebs builder


HashiCorp's Packer is a useful tool for building VM images for multiple platforms and software stacks. Builders like virtualbox-iso allow building images from scratch, installing a system onto an empty disk. Packer supports AWS EC2 AMIs, but it doesn't directly support building them from scratch.

So I've come up with the following two ways to build an AMI from scratch using Packer:

Plan A: use a customized builder, amazon-scratch

First I developed https://github.com/sorah/packer-builder-amazon-scratch . It attaches an additional disk to the source instance, then creates an AMI from that disk. This works well, but it can't be used on Atlas, because Atlas doesn't allow installing plugins.

Plan B: boot from tmpfs using user_data

The amazon-ebs builder supports user_data for the source instance. Ubuntu images ship cloud-init, which performs initialization using instance metadata, including user_data.

This plan uses the amazon-ebs builder with a source EC2 instance booted from tmpfs. The cloud-config bootcmd injects a shell script to run before /sbin/init, then reboots the instance. The script copies the entire rootfs into tmpfs, unmounts the root EBS volume, and finally kicks /sbin/init on tmpfs to continue the boot process. Provisioners can then format /dev/xvda and install a system onto it.

The script also changes sshd's listening port from 22 to 122, to make sure Packer connects to the instance only after the reboot. Set ssh_port to 122 in your Packer configuration to match.

I'm using this trick in https://github.com/sorah/gentoo-build, and it works with Packer out of the box.

Published at 2015-05-11 03:11:28 +0000 | Permalink

OS X: Determine from the command line whether a power source is connected

Use ioreg:

ioreg -rc "AppleSmartBattery" |grep ExternalConnected|awk '{print $3}' | grep -q '^Yes$'

(check the exit code)

Today our MacBook serving as a Jenkins slave accidentally went down due to power loss. So I've added a cron job, based on this one-liner, to notify my team when the power cable is disconnected.
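If you script this check, a small Ruby helper along these lines can back the cron job (the helper and its name are my own sketch, not part of the original one-liner; it takes the ioreg output as a string so the parsing can be tested anywhere):

```ruby
# Returns true when ioreg reports an external power source.
# ioreg_output is the text of `ioreg -rc AppleSmartBattery`.
def external_connected?(ioreg_output)
  line = ioreg_output.lines.find { |l| l.include?("ExternalConnected") }
  # relevant lines look like:       "ExternalConnected" = Yes
  !line.nil? && line.split.last == "Yes"
end

# In the cron job (on the Mac itself):
#   warn "power cable disconnected!" unless external_connected?(`ioreg -rc AppleSmartBattery`)
```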


Published at 2015-04-23 00:09:33 +0000 | Permalink

Monitoring fluentd with zabbix


Fluentd has a monitor_agent plugin that exposes plugin status (buffer, queue, etc.) via an HTTP API: http://docs.fluentd.org/articles/monitoring

<source>
  type monitor_agent
  port 24220
</source>

Using this, you can monitor fluentd buffer information with Zabbix low-level (user-defined) discovery.

#!/usr/bin/env ruby
# Emits Zabbix low-level discovery JSON from fluentd's monitor_agent API
require 'json'
require 'open-uri'

PLUGINS_URL = "http://localhost:24220/api/plugins.json"

json = JSON.parse(open(PLUGINS_URL, 'r', &:read))

puts JSON.generate(
  data: json['plugins'].map { |plugin|
    {
      "{#PLUGIN_ID}" => plugin['plugin_id'],
      "{#PLUGIN_CATEGORY}" => plugin['plugin_category'],
      "{#PLUGIN_TYPE}" => plugin['type'],
    }
  }
)
Place this script in your favorite location (here, /usr/bin/fluentd-zabbix-discovery) and define the user parameters:

UserParameter=fluentd.plugin.retry_count[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .retry_count'
UserParameter=fluentd.plugin.buffer_total_queued_size[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .buffer_total_queued_size'
UserParameter=fluentd.plugin.buffer_queue_length[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .buffer_queue_length'
UserParameter=fluentd.plugin.type[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .type'
UserParameter=fluentd.plugin.plugin_category[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .plugin_category'
UserParameter=fluentd.plugin.plugin_id[*],curl -s localhost:24220/api/plugins.json| jq -r '.plugins[] | select(.plugin_id == "$1") | .plugin_id'

Then you can define a template like this: https://gist.github.com/sorah/cfbb39cb750f9bdbdeb2

Note that this setup creates items using plugin_id, so defining a proper plugin_id (via @id) in fluentd's configuration is highly recommended:

<source>
  @id my_favorite_input
  type something
</source>
<match **>
  @id my_awesome_output
  type something
</match>
Published at 2015-02-01 15:06:22 +0000 | Permalink

Running gocode under a dependency manager

nsf/gocode searches for *.a object files under $GOPATH/pkg/$GOOS_$GOARCH by default. But Go dependency managers customize $GOPATH for their build environments, so gocode can't find the object files when you use a dependency manager such as godep, gondler, or gom.

To make it work correctly, set lib-path for those tools. A godep workspace is usually placed at Godeps/_workspace/pkg/$GOOS_$GOARCH. Relative paths work well:


gocode set lib-path 'Godeps/_workspace/pkg/linux_amd64:_output/local/go/pkg/linux_amd64'
Published at 2015-01-21 01:21:25 +0000 | Permalink

Restricting traffic to kube-proxy only from trusted networks


As of Kubernetes 0.8.0, there's no official way to restrict traffic to kube-proxy.

I'm using the following iptables rules to restrict traffic to only the local network and docker containers.

# These rules should be before `-j KUBE-PORTALS-CONTAINER` and `-j KUBE-PORTALS-HOST`
-t nat -A PREROUTING -i docker0 -d YOUR_PORTAL_NET -j MARK --set-mark 8820
-t nat -A PREROUTING -s YOUR_LOCAL_NET -d YOUR_PORTAL_NET -j MARK --set-mark 8820
-t nat -A OUTPUT -s YOUR_LOCAL_NET -d YOUR_PORTAL_NET -j MARK --set-mark 8820
# Allow marked packets
-A INPUT -i docker0 -m mark --mark 8820 -j ACCEPT

Replace YOUR_LOCAL_NET with your local network, and YOUR_PORTAL_NET with your kube-apiserver's -portal_net configuration.

Published at 2015-01-11 05:22:53 +0000 | Permalink

Hash#reject regression in Ruby 2.1.1

In Ruby 2.1.0 and earlier, the reject method of any class that inherits from Hash returns an object of that class. However, in Ruby 2.1.1 this behavior accidentally changed: it now returns a plain Hash object, not one of the inherited class.

class SubHash < Hash; end
p SubHash.new.reject{}.class #=> 2.1.0: SubHash 2.1.1: Hash
p Hash.new.reject{}.class #=> 2.1.0: Hash 2.1.1: Hash

(To be exact, extra state such as instance variables isn't copied either. With the release of Ruby 2.1.0 we changed our versioning policy, so 2.1.1 shouldn't include this kind of behavior change.)

This regression could potentially affect many libraries; one such case is Rails' HashWithIndifferentAccess and OrderedHash. They are broken, as the reject method now returns a plain Hash instead of a HashWithIndifferentAccess or OrderedHash. https://github.com/rails/rails/issues/14188

Why did this happen?

First, this is not an intended change. It's an accident caused by one missing backport commit in 2.1.1.

This behavior change was originally discussed in bugs.r-l.o#9223. However, it was rejected for the 2.1.0 release because it came too late. The change was rescheduled for Ruby 2.2.0, and a deprecation warning was added to 2.1.0.

The commits around this change are described in the following gist; read it for more detail: https://gist.github.com/sorah/9265008

Ruby 2.1.0 contains a C constant that switches Hash#reject's behavior via #ifdef. When this constant is set to 0, Hash#reject returns a plain Hash; when it is set to 1, Hash#reject returns an object of the receiver's class, with any extra state.

After the 2.1 branch was cut, revision 44358 renamed this constant and was backported to the 2.1 branch. However, that commit missed one line containing the constant name. The omission was fixed in revision 44370, but that revision was not included in the backport to the 2.1 branch. That is the reason for the regression.


So I recommend building a patched Ruby 2.1.1 with revision 44370, or adding this monkey patch to your application: https://github.com/rails/rails/pull/14198/files
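A minimal sketch of that kind of workaround (Rails' actual patch differs in its details): redefine reject in terms of dup and delete_if, both of which preserve the receiver's class:

```ruby
class SubHash < Hash
  # reject drops entries for which the block is true; implementing it as
  # dup.delete_if keeps the result a SubHash on every Ruby version.
  def reject(&block)
    dup.delete_if(&block)
  end
end

h = SubHash.new
h[:a] = 1
h[:b] = 2
p h.reject { |k, v| v > 1 }.class #=> SubHash
```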

As of now, revision 44370 has been backported to the 2.1 branch, so this accidental behavior change will be fixed when Ruby 2.1.2 is released. https://bugs.ruby-lang.org/issues/9576

As I wrote above, the behavior change itself is still scheduled for Ruby 2.2.0. I recommend fixing your code to anticipate it. One option is to redefine the reject method in your class, as Rails pull#14198 does.

(This article is a translation of my Japanese blog post, written at the request of and with help from zzak <3. Thank you for proofreading, zzak!: http://diary.sorah.jp/2014/02/28/ruby211-hash-reject)

Published at 2014-03-10 13:50:01 +0000 | Permalink

Photography workflow (2014 January)

Here's my current setup and workflow to take, develop, and upload photos.



I won't explain the camera itself in this article.



  • Mount SD card on Mac
  • Move RAW+JPEG photos to ~/Pictures/YYYYMMDD_EVENT-NAME
  • Upload to a local file server via SCP, and to Amazon S3, using Transmit.app

S3 Setup

Using S3 with the Glacier lifecycle option is a great choice for backup.



I choose the machine to run Lightroom on depending on the number of photos: the MacBook Air for up to about 100 photos, and the Windows box beyond that, because the MacBook Air is slow for editing.


On the MacBook Air I import from local disk, but on the Windows box I import from a local Samba server; the box and the server are connected with wired ethernet. I haven't had any problems editing photos on network storage, though I guess this wouldn't work well over a wireless network.

The catalog lives on local disk, because Lightroom doesn't support catalogs on network storage. To keep the catalog safe, Lightroom's catalog backup option is set to "each time Lightroom exits."


After importing to Lightroom, I pick which photos to publish using the "P", "X", and "U" shortcuts. Lightroom's flagging feature is great. In my experience, about half of the photos taken get picked for publishing.

Then, finally, I switch to the Develop module and edit the picked photos. I don't spend much time editing: adjust white balance, tune noise reduction, apply auto tone, tweak a bit, done.



Publishing is done with Lightroom's Flickr publishing integration, plus export to a local drive. I upload the exported photos to local network storage and Amazon S3.

I also use Flickr for some private photos (as a backup, or to show a few of them to friends). Guest Pass is a nice feature for sharing privately.


Published at 2014-01-18 20:23:13 +0000 | Permalink

Looking back at 2013

Looking back at 2013.


The statistics

481 GitHub contributions

I made 481 GitHub contributions (including private repositories: 1,227) in 2013.


Most of what's counted in the private repositories are things like Puppet manifests for servers I administer personally, plus some half-finished repositories for things I'm still quietly building.

22182 tweets

In 2013 I didn't tweet that much from my @sora_h account. There were 33,756 tweets on my other, private accounts, for a total of 55,938.


3450 pictures

I uploaded 3,450 pictures to Flickr. I'm still using the E-PL1, released in 2010, but it's showing its age. I'm thinking about getting a new camera in 2014. An E-PL6?

I also bought a prime lens in May 2013 and used it for most of my 2013 pictures: the LUMIX G 20mm/F1.7 ASPH.




29 conferences/events

I attended 29 conferences/events/meet-ups. But I gave no talks in 2013. I should, and will try to, make something to talk about in 2014...

ISUCON (an 8-hour performance tuning contest) was a very interesting event in 2013. I want to enter next time.


2 gems

I released 2 (tiny) gems.



28 books, 127 comics





177 amazon orders

Mostly books, comics, and CDs.

I also checked my iTunes library: 1,404 tracks (13.9 GB) were added in 2013. About 1 GB a month? Looking forward to an iPhone with 128 GB of storage.


2 trips

I took 2 trips in 2013 (Sapporo and Gunma). I'm planning to go to the US, and to Sapporo again, this year.



I attended many events and conferences, but I couldn't publish any big products or libraries in 2013. Nor can I say I contributed much to open source projects. I'll try to contribute or make something this year.

I did find time here and there to build things like Sinatra applications for my own use, but they remained unfinished or half-baked.

So, 2014

As I wrote above, my open source output in 2013 was far too small. I'll try to make something in 2014... and I'd like to learn another language (Scala? Go? Haskell?)

By the way, I guess I also have to improve my English skills...



Published at 2014-01-13 06:59:06 +0000 | Permalink

render_to_string doesn't work well in ActionController::Live

render_to_string doesn't work well in ActionController::Live.

That's because render_to_string modifies response_body and then restores it. But #response_body= regenerates response.stream, and ActionController::Live's overridden #response_body= closes response.stream, so response.stream.write no longer works after any render_to_string call in ActionController::Live.

def render_to_string(*)
  orig_stream = response.stream
  super
ensure
  # super replaced (and Live closed) the stream; put the original back
  if orig_stream
    response.instance_variable_set(:@stream, orig_stream)
  end
end
The above code works well as a monkey patch to fix this issue.

I submitted a pull request for this upstream: https://github.com/rails/rails/pull/11623

Published at 2013-07-28 00:54:55 +0000 | Permalink

Deny incoming IPv6 packets except from link-local addresses on OS X

ip6fw add 63500 allow tcp from any to any established
ip6fw add 63500 allow ipv6-icmp from any to any
ip6fw add 64000 deny ipv6 from not fe80::/64 to any in
ip6fw add 65000 allow ipv6 from fe80::/64 to any

See also: ip6fw(8)

Published at 2013-07-04 14:56:03 +0000 | Permalink

Class Variables and Instance Variables on Class, in Ruby

Do you know about the problems around class variables in Ruby?

Class variable

You can declare class variables by prefixing the variable name with @@, for instance @@foo.


But class variables can easily be overwritten by subclasses. This follows from the Ruby specification: class variables are shared with subclasses.
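A quick demonstration of that sharing (class and variable names are mine, for illustration):

```ruby
class Base
  @@name = "base"

  def self.name_value
    @@name
  end
end

class Child < Base
  # this does not create a new variable; it assigns Base's shared @@name
  @@name = "child"
end

p Base.name_value #=> "child"
```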

Class variables are similar to global variables: they're too hard to handle safely.

For most cases, I can't recommend using them.

Declare an instance variable on the class object

So then, how do we define a "class variable" safely?

In Ruby, classes are objects. This means you can define instance variables on a class itself.

The scope of an instance variable on a class is closed within that class. Thus, it doesn't affect subclasses.
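For example (the names here are illustrative, not from the original post):

```ruby
class AppConfig
  @default = "standard"

  class << self
    attr_reader :default
  end
end

class SubAppConfig < AppConfig; end

p AppConfig.default    #=> "standard"
p SubAppConfig.default #=> nil (the subclass has its own, separate @default)
```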

(Of course you can use attr_accessor, attr_reader, and attr_writer on the class. Example code)

Using from instance

Here's how to use those variables from instance objects.

Use attr_accessor

The simple solution.
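A small sketch of this (hypothetical names): expose the class-level value with attr_accessor on the singleton class, and have instances go through self.class:

```ruby
class Counter
  @limit = 10

  class << self
    attr_accessor :limit
  end

  def over?(n)
    # instances read the class-level value via the public reader
    n > self.class.limit
  end
end

p Counter.new.over?(11) #=> true
```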

Use instance_variable_get

But if you want to protect the variable from outsiders, you can use instance_variable_get together with a private method.
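A sketch of that approach (names are mine): the class exposes no public reader, and instances read the class-level variable through a private helper:

```ruby
class ApiClient
  @endpoint = "https://example.com/api" # no attr_reader defined for this

  def describe
    "endpoint: #{endpoint}"
  end

  private

  # instances reach the class-level setting without any public class reader
  def endpoint
    self.class.instance_variable_get(:@endpoint)
  end
end

p ApiClient.new.describe           #=> "endpoint: https://example.com/api"
p ApiClient.respond_to?(:endpoint) #=> false
```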

Published at 2013-01-29 02:29:02 +0000 | Permalink