LibreOffice, Python3 and AttributeError: ‘NoneType’ object has no attribute ‘supportsService’ April 10, 2015

Posted by Paolo Montrasio in Technology and Software.

I had to run a modified version of the famous DocumentConverter.py script on Ubuntu 14.04. It ran well on 12.04, but Ubuntu 14.04 comes with Python 3 and the Uno library that interfaces Python with LibreOffice or OpenOffice no longer works with the old Python 2 script.

Solution: convert the script to Python 3 (exceptions, print and has_key have changed), then install these packages:

sudo apt-get install libreoffice-dev libreoffice-script-provider-python python3-uno

and the program will work. If you skip installing them you’ll get the AttributeError: ‘NoneType’ object has no attribute ‘supportsService’ error, because loadComponentFromURL won’t be able to read the input file.
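For reference, the three Python 3 changes mentioned above can each be checked in isolation from the command line. This is a minimal sketch with demo values, not code from the actual converter:

```shell
# print is a statement in Python 2 but a function in Python 3
python3 -c 'print("hello")'
# dict.has_key is gone in Python 3; use the "in" operator instead
python3 -c 'd = {"k": 1}; print("k" in d)'
# "except ExcType, e" became "except ExcType as e"
python3 -c '
try:
    raise ValueError("boom")
except ValueError as e:
    print(e)
'
```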

Compile your own Ruby and use it with RVM December 26, 2014

Posted by Paolo Montrasio in Technology and Software.

Prompted by the news that gcc 4.9 makes Ruby 2.1 faster, I decided to compile my own Ruby 2.2.0 and pit it against the one that comes with RVM, while still being able to switch between Rubies with RVM. I had to google a little to learn how to do it, so I want to share.

rvm install 2.2.0
rvm use ruby-2.2.0
# find out the compilation options
ruby -r rbconfig -e 'puts RbConfig::CONFIG["configure_args"]'
 'optflags=-O2' '--enable-load-relative' '--sysconfdir=/etc'
 '--disable-install-doc' '--enable-shared'
wget http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.0.tar.gz
# important, always compare to the hash advertised at
# https://www.ruby-lang.org/en/downloads/
md5sum ruby-2.2.0.tar.gz
tar xzf ruby-2.2.0.tar.gz
cd ruby-2.2.0
mkdir -p /home/me/compiled-rubies/2.2.0p0
# configure with the same compilation options
# of the standard binary
CFLAGS=-O2 ./configure --enable-load-relative \
  --sysconfdir=/etc \
  --disable-install-doc --enable-shared \
  --prefix=/home/me/compiled-rubies/2.2.0p0
make
make test
make install
# make it available to rvm as ext-ruby-2.2.0-gcc4.9_O2
rvm mount /home/me/compiled-rubies/2.2.0p0 \
  -n ruby-2.2.0-gcc4.9_O2
rvm list
    ext-ruby-2.2.0-gcc4.9_O2 [ x86_64 ]
 => ruby-2.2.0 [ x86_64 ]
rvm use ext-ruby-2.2.0-gcc4.9_O2

The files in ~/.rvm/rubies/ext-ruby-2.2.0-gcc4.9_O2 will be symlinks to the ones in compiled-rubies/2.2.0p0 so don’t remove that directory.

The point of this post is already made but as a bonus here are the benchmarks of the two Rubies using Antonio Cangiano’s tests.

git clone git://github.com/acangiano/ruby-benchmark-suite.git
cd ruby-benchmark-suite
rvm use ruby-2.2.0 # for the standard one
rvm use ext-ruby-2.2.0-gcc4.9_O2 # for the compiled one
rake # This might fail, see the note at the end

Here are the results: ruby-2.2.0 and ruby-2.2.0-gcc4.9_O2 (YAML), summary (CSV). TL;DR: the compiled Ruby is a little faster overall. It’s much faster in a few tests and a bit slower in some others, so it’s a difficult choice and probably depends on what you do. Please note all the tests that ended with errors (look at the YAML files): they could change the overall assessment of which version is faster, but I haven’t dug into that issue yet.

In case of failure

Rake could end with a weird syntax error for the compiled Ruby. There are two possible fixes. One is to replace `which rake` with the version from the 2.2.0 binary distribution. The other is to really understand what’s going on. The key is that rake is a bash script which execs a Ruby interpreter on itself using Ruby’s -x switch, which strips away the bash preamble at the beginning of the file. But this Ruby doesn’t seem to honour that. No time to investigate any further now…

Running Ruby on Rails tests with a ramdisk backed PostgreSQL December 11, 2014

Posted by Paolo Montrasio in Technology and Software.

I’m not a fan of mocking objects in my Ruby on Rails tests, so my tests always hit the database, which is PostgreSQL whenever I can make the choice and a customer doesn’t dictate MySQL.

Hitting the DB means the test suite slows down as the application grows and the tests pile up.
For a long time I’ve wanted to check what happens if I run my tests against a database backed by RAM instead of a spinning disk. Would they be much faster?

TL;DR: no, they run at the same speed.

Creating the DB on the ramdisk

The setup is based on information provided at http://stackoverflow.com/questions/11764487/unit-testing-with-in-memory-database and http://jayant7k.blogspot.com.au/2006/08/ram-disk-in-linux.html

The test system is my laptop: Ubuntu 12.04, i7-4700MQ CPU @ 2.40GHz, 16 GB RAM, an SSD for the OS and an HDD for my home directory and the databases. The DB is PostgreSQL 9.3.

Linux has 16 ramdisks already created as /dev/ram* at boot time. Let’s take one and mount it.

mkdir ~/tmp/ram
sudo mkfs.ext4 -m 0 /dev/ram0
sudo mount /dev/ram0 ~/tmp/ram/
df -h ~/tmp/ram/
Filesystem Size Used Avail Use% Mounted on
/dev/ram0 58M 1.3M 56M 3% /home/me/tmp/ram

It’s a tiny disk and it turned out to be barely enough to accommodate my tests but it’s OK for experimenting.
You can make it larger if you need to. http://jayant7k.blogspot.com.au/2006/08/ram-disk-in-linux.html explains how.
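A tmpfs mount is an alternative without /dev/ram0’s fixed size; the 512M cap below is an arbitrary example. This is an untested sketch, wrapped in a script so you can review it before running it as root:

```shell
# tmpfs allocates pages on demand up to the size= cap,
# so the 58M /dev/ram0 ceiling goes away.
cat > mount-ram-tmpfs.sh <<'EOF'
#!/bin/sh
mkdir -p "$HOME/tmp/ram"
mount -t tmpfs -o size=512M tmpfs "$HOME/tmp/ram"
df -h "$HOME/tmp/ram"
EOF
sh -n mount-ram-tmpfs.sh && echo "looks sane, run it with sudo"
```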

We create a DB there now.

cd ~/tmp/ram
sudo mkdir postgresql
sudo chown postgres.postgres postgresql/
sudo -u postgres /usr/lib/postgresql/9.3/bin/initdb \
  --locale=en_US.UTF-8 -D /home/me/tmp/ram/postgresql/
sudo -u postgres mkdir postgresql/log

We make it use a different port from the one used by the default PostgreSQL DB on the laptop.

vi postgresql/postgresql.conf

port = 5433

We start the DB and connect to it

sudo -u postgres /usr/lib/postgresql/9.3/bin/pg_ctl \
  -D ~/tmp/ram/postgresql/ \
  -l ~/tmp/ram/postgresql/log/postgresql-9.3-main.log start
psql -p 5433 -U postgres


Running tests

We edit config/database.yml to use the ramdisk DB

port: 5433

We create the test user and the test db

psql -p 5433 -U postgres
create role testuser login password 'password';
alter user testuser with createdb;
create database myapp_test owner testuser encoding='UTF8' lc_collate='en_US.UTF-8' lc_ctype='en_US.UTF-8';

We create the DB schema

cd the/rails/directory
RAILS_ENV=test rake db:migrate

And finally we benchmark the tests over the two databases.

rake spec:controllers


Finished in 1 minute 5.34 seconds (files took 1.76 seconds to load)
Finished in 1 minute 4.26 seconds (files took 1.75 seconds to load)
Finished in 1 minute 2.07 seconds (files took 1.75 seconds to load)


Finished in 1 minute 7.09 seconds (files took 1.76 seconds to load)
Finished in 1 minute 6.01 seconds (files took 1.72 seconds to load)
Finished in 1 minute 4.68 seconds (files took 1.74 seconds to load)

Two seconds are not worth the trouble. Let’s benchmark the models.

rake spec:models


Finished in 1 minute 36.8 seconds (files took 1.69 seconds to load)
Finished in 1 minute 38.08 seconds (files took 1.72 seconds to load)
Finished in 1 minute 37.9 seconds (files took 1.73 seconds to load)


Finished in 1 minute 38.64 seconds (files took 1.79 seconds to load)
Finished in 1 minute 32.73 seconds (files took 1.69 seconds to load)
Finished in 1 minute 41.89 seconds (files took 1.71 seconds to load)

No difference at all, only a bit more variance in the durations of the HDD tests.

This is my conjecture: the data goes first into the OS file buffer and is synced to the disks later. Syncing to a ramdisk is faster, but if there is enough RAM the data stays in the buffer cache anyway, so it doesn’t matter whether a ramdisk or an HDD sits behind it. Remember: this is a test DB with a handful of data, not a large production DB with a high I/O load.

Let’s stop the DB and change the configuration to do without syncing. If there is no speedup, my conjecture is confirmed.

sudo -u postgres /usr/lib/postgresql/9.3/bin/pg_ctl \
  -D ~/tmp/ram/postgresql/ \
  -l ~/tmp/ram/postgresql/log/postgresql-9.3-main.log stop
sudo vi ~/tmp/ram/postgresql/postgresql.conf
sudo -u postgres /usr/lib/postgresql/9.3/bin/pg_ctl \
  -D ~/tmp/ram/postgresql/ \
  -l ~/tmp/ram/postgresql/log/postgresql-9.3-main.log start
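The settings to change are the usual durability parameters; whether these exact three were used is an assumption, but in a throwaway test instance the typical choice is:

```
fsync = off
synchronous_commit = off
full_page_writes = off
```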

Run the test on the ramdisk again.

rake spec:models


Finished in 1 minute 36.52 seconds (files took 1.71 seconds to load)
Finished in 1 minute 35.45 seconds (files took 1.7 seconds to load)
Finished in 1 minute 36.59 seconds (files took 1.68 seconds to load)

No difference, so my tests didn’t move enough data to make the syncing operations relevant.


You can keep your test DB on a spinning disk: the OS buffering will make it fast.
If you want quick tests you probably have to mock everything and do without the DB.

Alternative: run tests in parallel with the parallel_tests gem.


Could I have done something better to make the ramdisk based DB run faster?

Ruby’s Influence over the Elixir Language October 5, 2014

Posted by Paolo Montrasio in Technology and Software.

This picture shows exactly what you’d expect a Ruby conference to be, doesn’t it? Wait, it’s not what it looks like. I can explain.

Ruby Day 2014 / Lunch Break


That was the lunch break and we had a wonderful sun and a wonderful lawn :-)

That was in Roncade, Italy at the premises of H-FARM.

I was there for the usual stuff: learning new things, meeting people I knew and getting to know new people. Plus, for my first time at a Ruby conference, giving a talk. A talk about Ruby? No, a talk about a language designed to look like Ruby regardless of the huge differences beneath. This language is Elixir, and this is my presentation on Slideshare (sorry for the fonts, some of them didn’t survive the conversion after the upload).

2014-10-23 – Update: we’ve got the video!

The original presentation files (odt, ppt, pdf) with the speaker notes and some tutorials are at http://connettiva.eu/rubyday

Also visit my GitHub repository for a demo application built with Phoenix (a RoR-like web framework for Elixir) at https://github.com/pmontrasio/phoenix-demo-app

Upgrade a Rails 4 app to Rspec 3 July 27, 2014

Posted by Paolo Montrasio in Technology and Software, Tips.

I have a Rails 4 application with RSpec 2, using a mix of should and expect assertions. I wanted to upgrade to RSpec 3 without changing the specs for now. I updated the Gemfile, ran bundle install and rake spec, and got many errors: most helpers went missing (undefined methods visit, sign_in, root_path, etc., plus anything defined inside app/helpers). Googling around, I found a solution for everything; the keys to restoring the old behaviour are two.

1) The new RSpec doesn’t include helpers based on the directory the specs are stored in. You either define the spec type with metadata like type: :feature or type: :controller, or you add

config.infer_spec_type_from_file_location!

to the RSpec.configure block.

2) The should syntax has been deprecated and no longer works by default. You must enable it with

config.expect_with :rspec do |c|
  c.syntax = [:should, :expect]
end
Minor quirks:

  • You must remove require 'rspec/autorun'
  • example doesn’t exist anymore; it has been replaced by RSpec.current_example

FactoryGirl and Paperclip: testing content types January 29, 2014

Posted by Paolo Montrasio in Technology and Software.

I used Paperclip to add a picture to a model, something I’ve been doing for years. This time I also added a validation of content types, and this might be a first for me (I don’t want to grep all the models of all my past projects). The validation is

validates_attachment :picture,
  content_type: { content_type: ["image/jpg", "image/png"] }

Now I want to test it. I was loading real image files into the objects created with FactoryGirl. This is the code

picture  { File.open("#{Rails.root}/#{%x[ls test-images/*jpg].split("\n").sample}") }

Note that I’m using %x[].sample to randomly pick an image from a directory, but that’s not important.

The code above doesn’t set a mime type and the validation fails. I had to google quite a lot to find the right hints (some solutions have been obsoleted by newer versions of Paperclip and maybe other parts of the toolchain). The solution is

Rack::Test::UploadedFile.new("#{Rails.root}/#{%x[ls test-images/*jpg].split("\n").sample}", "image/jpg")

which loads the image and sets its content type.

Espruino, first impact January 28, 2014

Posted by Paolo Montrasio in Technology and Software.

I received my Espruino yesterday and this is the tale of my first impact with it, as a pure software developer with no hardware skills to speak of.
Conclusions first: it’s going to be fun and I’ll learn many things. But how long will it take? Read on.
Let’s define the baseline. What do I know about electronics? I know how to change a light bulb, how to connect wires to a plug, a socket, a switch. I know that resistors are for heaters or for making light in tungsten lamps. I know that capacitors are to store energy. I know that solenoids are to make transformers or antennas. That’s it.
So, I got the Espruino, unpacked it, wondered at all those wires in the package and at the other little boxes (the Espruino board, a relay, two servos, a temperature sensor, lots of LEDs). There is a printed piece of paper with the URL of the Quick Start page. I went there and watched the video. Then I watched this one on YouTube, which is even simpler. I suggest you watch it too, to understand what’s going on next.
I’m using Ubuntu Linux and I didn’t have the minicom program used in the example but Ubuntu tells me how to install it. First problem solved.
I connected the Espruino to my netbook with my smartphone data cable which has the same connector. Second problem solved.
I type in the commands from the video, switch the LEDs on and off and make them blink at different speeds. Great!
Then I get to 2:55. “For that you need a battery”, and I obviously have many batteries at home, AA or AAA formats, 1.5 V. Yes, I also know about V and A, I’m such a genius! :-) Unfortunately that means I also know that plugging in the wrong battery could burn my Espruino. Which battery are they using? They don’t say, but the specs of the Espruino will probably help me. Furthermore I don’t have any connector that would fit into that socket and only a vague idea about where to buy one. OK, let’s take a note.
A few seconds later into the video they plug a servo into the board. There are two servos in the box I received. I get one and check it. I realize that I can’t plug it anywhere into the board. It’s got a female 3 pin plug and my board has only holes. But the board in the video is different: it’s got a lot of male pins, and I realize now that at 0:20 they said “pin headers we soldered on it”. Oh… soldered. It means I need a soldering iron. Guess what? I don’t have one and have only a vague idea of how to use it. I did it a couple of times in my life, on objects much larger than these. Oh, I think I’ll also need some tin or lead. Let’s take another note. That’s a new skill to learn.
Luckily the box contains many pin headers. I’ll have to cut them to the sizes in the video but I can probably manage. I think they’ve been using the ones that look like the ones in the picture on this page. OK, I don’t have the battery and I don’t have the soldering iron, so let’s move on to the next demo.
I have no light sensors in my box (I think) but I have a temperature sensor. I know because it looks exactly like the one in this page and in this one (and that’s my relay!)
OK, I still don’t have a way to plug it into the board (no soldering iron!) but let’s pretend I can. How do I connect the temperature sensor? Oh, I need a 4.7 kΩ resistor. I know what that is, and maybe it’s one of the resistors in the box, but there is nothing written on them. I guess there might be a color code; I’ll ask on the forum. Let’s take a note.
But the real question is: why do I need a resistor to make the temperature sensor work? You know, I’m a software developer, so my first thought is that the sensor has a bug and we must fix it. Couldn’t they sell it so that it works out of the box? OK, I’m pretty sure the real answer is that the sensor is perfectly fine and I need the resistor for some reason I can’t understand given my almost zero knowledge of electronics. I even found a page with a picture of the sensor plus the resistor, and it looks like the ones in my box. Let’s move on.
Looking at the relay I see some good news: I know how to connect wires to it, it’s like what’s inside a plug :-)
So, let’s recap: no battery, no idea about the voltage, no connector for the battery, no soldering iron, no idea about why I need resistors, but a guess about which one to use. All of that has reminded me why hardware is hard and confirmed my old choice to turn to software. But learning things is great, so I’ll google around and find solutions. And I’ll stop by an electronics store and buy some tools.

Build Mobile Chrome Apps from the command line December 6, 2013

Posted by Paolo Montrasio in Technology and Software.

Mobile Chrome Apps are great news. I’m finally able to create an Android app using the technologies I know best: HTML, CSS and JavaScript. Not a single line of Java. And if you don’t have to use Java you don’t need an IDE, but how do you build the app without Eclipse or the like?
Here is how to do it using only command line tools, which is useful in many automation scenarios.

git clone git://github.com/MobileChromeApps/mobile-chrome-apps.git
mobile-chrome-apps/mca create eu.connettiva.MyApp
cd MyApp/www
vi config.xml    # write your data
vi manifest.json # write your data
vi background.js # edit it if your app doesn't
                 # start with index.html

Write your code, then build the Android app.

cd ..
../mobile-chrome-apps/mca build --release
      # or --debug if you don't want to sign the app

The first time you sign you need to generate the keys

keytool -genkey -v -alias MyApp -keyalg RSA \
  -keysize 2048 -validity 10000 -keystore MyApp.keystore

You must sign the apk and align it every time you build it.

jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 \
  -keystore MyApp.keystore \
  platforms/android/bin/MyApp-release-unsigned.apk MyApp
zipalign -f -v 4 \
  platforms/android/bin/MyApp-release-unsigned.apk MyApp.apk

Finally run the app in an emulator. Get the list of available devices with

android avd

and start one of them. Then install the apk in the emulated device with

adb install MyApp.apk # install -r (reinstall)

It’s ready to be used now!

If at any time mca tells you to run mca init to update itself do this:

cd ~/mobile-chrome-apps
./mca init

Logging Skype events October 29, 2013

Posted by Paolo Montrasio in Technology and Software.

On a whim I started looking at the advanced notification panel in Skype (Options, Notifications, Advanced View) and found out that we can run a script on events. Wow, that means we can get the details of a Skype event on the command line and into any interpreted or compiled program.

My first attempt was a very simple bash script (I’m on Linux)

#!/bin/bash
echo "$*" >> skype.log

The name of the script must be typed into the box in the options panel, with some parameter variables. There is little documentation about them, but after a little googling and a look at the output of strings /usr/bin/skype | egrep '^%' I got this list: %type, %sskype, %sname, %fpath, %fname, %fsize, %fprogress, %fspeed, %smessage.

I’m calling my script as log-skype %type %sskype %sname %smessage and it’s still very simple:

echo "`date +'%Y-%m-%d %H:%M:%S'`" $* | sed 's/ %smessage$//' >> ~/skype.log

The sed removes %smessage from the lines of events that don’t carry a text message.

This lets me build a plain text log of all my messages. Unfortunately it seems there isn’t a variable with the name of the conversation but that’s not a problem.

The next step could be integrating the events with D-Bus, but there are already other web pages about that.

If you want to inspect Skype’s message databases you might also be interested in Skyperious.

Ruby on Rails Archeology and Restoration June 3, 2013

Posted by Paolo Montrasio in Technology and Software.

I’m experiencing time travel back to 2006: that was when I wrote my very first Ruby on Rails application, and this week I had to take it out of the archives and make it run again. I ran into a few entertaining hurdles and had to perform some tricks that might be useful to know.

I think it started as a Rails 1.x application and was upgraded until 2.3.2. I had to upgrade it to 2.3.18 now, because of the vulnerabilities discovered over the years.

The application is so old that it didn’t have a Gemfile. Creating one is simple (only 7 gems; the ecosystem was tiny at the time) but the versions of the gems must be picked with care. It runs with nothing newer than rake 0.8.7, rubygems 1.6.2 and Ruby 1.8.7. Follow this tutorial to adapt the application to bundler, or you’ll run into trouble in production mode (methods not found, etc).

To install ruby-1.8.7-p371 I had to specify the rubygems version with rvm install ruby-1.8.7-p371 --rubygems 1.6.2, or run into “There was an error while trying to resolve rubygems version for ‘latest’”.

Thanks to the pg gem it keeps working with PostgreSQL 9.2 (it started with 8.1), but I was using some stored procedures, more for fun than for real need (though they were definitely faster than AR). I was accessing the recordset data with connection.exec(query).result. That doesn’t work anymore: at some point in the last few years the result method was replaced by values.

I was using gettext to handle internationalization, because Rails didn’t have an internal i18n framework at the time. By the way, I think gettext is still a better and higher level framework than what we have now, especially because it can parse Ruby sources, collect the new translations and add them to the translation files marked as untranslated. Anyway, the whole world uses I18n, it’s pointless to mix the two, and gettext was abandoned. But the latest version of the gem is not compatible with Rails 2.3.5+. Big problem, with 600+ translations to handle. Luckily there is a commit that fixes that.

gem "locale_rails", :git => "git://github.com/mutoh/locale_rails.git", :ref => "13a096f20b"

The application ran on a mongrel_cluster behind an Apache reverse proxy. bundle install looks for mongrel 1.1.6, which can’t be found anywhere on the Internet, but version 1.1.5 works and is on GitHub. However, with mongrel_cluster I’m having some weird problems in the inner workings of my application. Running mongrel without the cluster works fine, so I ended up starting the individual mongrel processes manually. That’s good enough until I investigate the issue and try to move the application to Passenger. Should I port it to Rails 3.2? We’ll see what happens, but did I mention that the JavaScript front end (some 3000 lines) is based on dojo.js and its pub/sub infrastructure? It’s a steep cliff to climb.

The application deals with timezones using the tzinfo gem, but apparently the timezone names generated by Rails are no longer what tzinfo expects. As an example, Rails’s select helper generates “London” as a value but tzinfo wants “Europe/London”. I had to implement my own helper along the lines of TZInfo::Timezone.all.map{|tz| "<option value=\"#{tz.identifier}\">#{tz.to_s}</option>"}. Actually it’s much more complicated, because I also wanted to display the offset from UTC and sort by it. Maybe I’ll add a gist. I didn’t investigate the helper issue, but this should apply to Rails 3.2 as well, because I used the latest version of tzinfo. By the way, TZInfo doesn’t have some possibly embarrassing glitches of ActiveSupport::TimeZone, which among other things places Edinburgh in the Europe/Dublin timezone (a different country) and Bern in Europe/Berlin (again, why not Europe/Bern?). It has no Edinburgh or Bern at all, but sometimes less is better. Working with timezones is always a messy business.

Eventually it runs.

A few random notes:

It still uses .rhtml files. I had forgotten they existed. Did you?

It was originally versioned with CVS, not even Subversion, but mercifully I moved it to git years ago and forgot I had. It was a nice surprise.

This is Rails 2.3, so you manage the application with ruby script/server and ruby script/console; rails s doesn’t do what I’m used to now.

Last but not least, it starts up much more quickly than the Rails 3.2 applications I’m working on, probably thanks to the small number of gems it uses.
