Friday, March 15, 2019

Our initiatives to become a more climate friendly family

The climate is eagerly debated, and ideas for cutting CO2 emissions are constantly being suggested. For a period of 3 years I have been reading about what we as citizens can do on our own. It took me some time to get started, but during 2018 we finally did, as a family. Everything listed here can be taken as inspiration:
  1. All bulbs have been replaced by energy savers.
  2. Our old dryer with high power consumption has been replaced by a secondhand dryer with low power consumption.
  3. We only eat meat once per week.
  4. We have decided to buy secondhand whenever possible.
  5. We've planned a vacation in 2019 that does not require a flight for transportation.
  6. We've started sorting our trash. Yes, it is a requirement in our municipality, but we are doing a little extra, also making sure that plastic bags and hard plastic objects are delivered for recycling.
What have you done?

Thursday, November 20, 2014

ArcticWeb 2.0 Released

We (the ArcticWeb team) have released ArcticWeb 2.0. This version contains the new current, ice concentration, ice thickness, ice speed and ice accretion forecasts, as well as satellite images and a number of improvements and bug fixes. We decided to release ArcticWeb as version 2.0 because this release is the sum of the features targeted for ArcticWeb development in 2014.

New features, improvements and bug fixes in this release are: 
  • Current forecast
  • Ice concentration forecast
  • Ice thickness forecast
  • Ice speed forecast
  • Ice accretion forecast
  • Ability to view NASA satellite images in ArcticWeb
  • A page enabling users to give feedback at any time.
  • Departure date and time are no longer maintained in the Route Edit view, but only in the Schedule view, making the Forecast on route feature easier to use.
  • The latest DMI weather forecast is now always shown.
  • The content of Inshore ice reports is now always shown correctly.

Features worth mentioning from previous releases in 2014 are: 
  • Iceberg charts
  • Inshore ice reports
  • Weather forecasts from DMI
  • Forecasts for routes
  • Import of schedule data from Excel sheet.
  • Import of SAM ChartPilot ECDIS route files
  • Import of Sperry Marine VisionMaster FT route files
  • Reporting to coastal control 


I have previously not used my blog for this kind of update, but this may change in the future.

Sunday, September 1, 2013

How to execute the maven-karma-plugin in a CloudBees Jenkins build job (by installing NodeJs and Karma)

I recently faced the task of getting my new JavaScript unit test suite, implemented using Karma, to execute as part of a Maven build. The maven-karma-plugin was the obvious choice and worked like a charm on my own machine. A few tricks were, however, necessary to make it work in the Jenkins Maven build job executed on CloudBees. Many thanks to the blogs mentioned at the bottom and to CloudBees support.

I added a script which installs Node.js and Karma if they are not already installed. This was done by adding a pre-build step of type 'Execute Shell'. Notice that I am also installing PhantomJS to execute the tests using a headless browser.

# install nodejs, if using cloudbees (and if not already installed)
curl -s -o use-node
NODE_VERSION=0.10.13 source ./use-node

ARCH=`uname -m`

# install phantomjs, karma
[ -d /scratch/jenkins/addons/node/$node_name/lib/node_modules/phantomjs ] || npm install -g phantomjs
[ -d /scratch/jenkins/addons/node/$node_name/lib/node_modules/karma ] || npm install -g karma
[ -d /scratch/jenkins/addons/node/$node_name/lib/node_modules/karma-junit-reporter ] || npm install -g karma-junit-reporter
[ -d /scratch/jenkins/addons/node/$node_name/lib/node_modules/karma-phantomjs-launcher ] || npm install -g karma-phantomjs-launcher

Note: I also had to add the karma-junit-reporter and karma-phantomjs-launcher plugins to the Karma configuration (karma.conf.js). My plugin configuration looked like this:

plugins : [ 'karma-jasmine', 'karma-chrome-launcher', 'karma-firefox-launcher', 'karma-junit-reporter', 'karma-phantomjs-launcher' ]

Now the tricky part: the system path configured in a pre-build step was not available in the main build step. I learned that entries in the $HOME/bin folder would be available in the main build job, so I added the following to the bottom of the script (in the pre-build step):

[ -d $HOME/bin ] || mkdir $HOME/bin
[ -f $HOME/bin/karma ] || ln -s /scratch/jenkins/addons/node/$node_name/bin/karma $HOME/bin/karma
[ -f $HOME/bin/node ] || ln -s /scratch/jenkins/addons/node/$node_name/bin/node $HOME/bin/node

Next was to execute the maven-karma-plugin. Since this was a Maven build job, there was no way to configure a file pattern matching the Karma unit test reports, so Jenkins would initially not collect the reports and include them in the unit test output. I solved this by first configuring Karma (karma.conf.js) to place the reports in the target/surefire-reports folder:

junitReporter : {
  outputFile : 'target/surefire-reports/karmaUnit.xml'
}

and next configuring the maven-karma-plugin to execute just before unit tests in the Maven phase process-test-classes:

      <!-- execute karma before test phase to let Jenkins collect the target/surefire-reports/karmaUnit.xml -->
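Beyond that comment, the plugin binding itself can be sketched roughly like this (a sketch, not my exact pom.xml: the com.kelveden groupId, the start goal and the version number are taken from the maven-karma-plugin's public documentation and should be checked against the version you use):

```xml
<plugin>
  <groupId>com.kelveden</groupId>
  <artifactId>maven-karma-plugin</artifactId>
  <version>1.6</version>
  <executions>
    <execution>
      <!-- run before the test phase so Jenkins can collect
           target/surefire-reports/karmaUnit.xml -->
      <phase>process-test-classes</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```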

Voila. Karma unit tests were now executed on my Jenkins build server on CloudBees, and the JavaScript unit test reports are available through the web interface.

I've found inspiration in these:

Friday, August 16, 2013

Installing Node.js, NPM and Karma on Ubuntu 11.10

I've seen many recipes for how to install Node.js and Karma on Ubuntu 11.10, but none as simple as this.

Download the latest Node.js and unzip the .tar.gz file into some directory (from here on called nodeJsDir).

Open a terminal and type in the following commands:

cd nodeJsDir
./configure
make
sudo make install

Thereafter, simply install Karma by executing the command

sudo npm install -g karma

Karma can now be started by executing

karma start

Friday, January 18, 2013

Solved bad_record_mac error for Jenkins/Hudson on Tomcat

I have just updated Lund&Bendsen's continuous integration server from Hudson version 2.2.1 to version 3.0.0. The upgrade process itself gave no trouble, but all jobs checking out from Subversion failed with the well-known bad_record_mac error:

org.tmatesoft.svn.core.SVNException: svn: E175002: Received fatal alert: bad_record_mac
svn: E175002: PROPFIND request failed on '/xxxxxxxx/trunk'
    at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(
    at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(
    at hudson.scm.SubversionSCM$CheckOutTask.checkClockOutOfSync(
    at hudson.scm.SubversionSCM$CheckOutTask.invoke(
    at hudson.scm.SubversionSCM$CheckOutTask.invoke(
    at hudson.FilePath.act(
    at hudson.FilePath.act(
    at hudson.scm.SubversionSCM.checkout(
    at hudson.scm.SubversionSCM.checkout(
    at hudson.model.AbstractProject.checkout(
    at hudson.model.AbstractBuild$AbstractRunner.checkout(
    at hudson.model.AbstractBuild$
    at hudson.model.ResourceController.execute(
Caused by: Received fatal alert: bad_record_mac
    ... 19 more

I tried different solution approaches, but what worked was a simple update from JDK 6 to JDK 7. There was a catch to the solution, though. We had installed Tomcat as a Windows service, and I updated the service configuration to use JDK 7 as JAVA_HOME. That was, however, not enough. I also had to update the service JVM configuration, which pointed at the JDK 6 JVM. I therefore ended up updating the Tomcat Windows service as follows:

TOMCAT_HOME\bin>tomcat7 //US//Tomcat7 --JavaHome="C:\Program Files\Java\jdk1.7.0" --Jvm="C:\Program Files\Java\jdk1.7.0\jre\bin\server\jvm.dll"

More about the Tomcat service scripts can be seen here:

Inspiration for the JDK upgrade came from an existing issue report. The issue also indicates that the problem resides in the Subversion plugin, which is used in both Hudson and Jenkins. The above fix should therefore also hold for Jenkins installations on Tomcat.

Saturday, October 8, 2011

Activating iPad 2 on Ubuntu 11.04 (Natty)

Without too many investigations, I recently ordered an iPad 2 for home use. On first startup, I discovered to my regret that it requires to be paired with iTunes on a Mac or PC (Windows). After some googling I found that libimobiledevice would do the trick. I have seen many installation guides for libimobiledevice, but none solving the error I ran into. Hence this blog entry.

NB: I am an Ubuntu/Linux newbie, and there may be better ways of solving the troubles I had.
Here is what I ended up doing.

I found the blog entry Ubuntu - Activate your brand new iPAD without iTunes and followed the guidelines. At the step Compile & Install LIBIMOBILEDEVICE library and tools, however, I ran into trouble. When executing the command

./ --prefix=/usr

the configure step stopped with the error message

checking for libusbmuxd... configure: error: Package requirements (libusbmuxd >= 0.1.4) were not met: 

No package 'libusbmuxd' found 

Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. 

Alternatively, you may set the environment variables libusbmuxd_CFLAGS and libusbmuxd_LIBS to avoid the need to call pkg-config. 
See the pkg-config man page for more details.

It took me some time and lots of googling to realize what the problem was: I had the libusbmuxd package installed, but not the libusbmuxd-dev package. (Inspiration was found in the forum entry schwierigkeiten mit "libimobiledevise-1.0.4".) I therefore used the Synaptic Package Manager to install the libusbmuxd-dev package (version 1.0.7-1ubuntu1~natty1 in my case).

This solved the issue but led to an identical error for the libplist package. I therefore also installed libplist-dev (version 1.4-1ubuntu1~natty1).
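The pattern generalizes: when configure reports "No package 'X' found" even though X itself is installed, the missing piece is usually the X-dev package carrying the pkg-config .pc file. A small helper sketch (the X-to-X-dev naming is an assumption that happens to hold for these two Ubuntu packages):

```shell
#!/bin/sh
# Map a pkg-config "No package 'X' found" error message to the
# Ubuntu -dev package that typically provides the missing .pc file.
suggest_dev_pkg() {
    pkg=$(printf '%s\n' "$1" | sed -n "s/.*No package '\([^']*\)' found.*/\1/p")
    [ -n "$pkg" ] && echo "sudo apt-get install ${pkg}-dev"
}

suggest_dev_pkg "No package 'libusbmuxd' found"
suggest_dev_pkg "No package 'libplist' found"
```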

The rest of the installation went smoothly, simply following the rest of Nicolas Bernaerts' excellent guidelines.

Wednesday, May 25, 2011

Performance optimizing a Hibernate application

A few weeks ago, I was out helping a customer of my company. They suffered from seriously bad performance in a smaller web application. The web application was built using Hibernate, Wicket, Guice, Warp-persist and more. The architecture of the application was sound. It consisted of 3 layers:
  • A (wicket) Web layer
  • A service layer (business functionality in Guice POJOs, bypassed if not needed)
  • A persistence layer (implemented as Guice/Hibernate DAOs)
The amount of data in the MySQL database did not raise any alarms with respect to performance; in other words, the performance problems should not have been present.

After having been introduced to the domain model, I started out using the strategy often recommended by others: I enabled Hibernate query logging by modifying the log4j config, setting org.hibernate.SQL=DEBUG and occasionally also org.hibernate.type=TRACE to see JDBC in- and output parameters.
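Concretely, the logging setup amounts to a fragment like this (property-file syntax assumed; adjust accordingly if your project uses log4j.xml):

```properties
# log4j.properties -- Hibernate query logging
# show the SQL statements Hibernate executes
log4j.logger.org.hibernate.SQL=DEBUG
# show JDBC bind parameters and extracted values (very verbose; enable temporarily)
log4j.logger.org.hibernate.type=TRACE
```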

With Lund&Bendsen's Hibernate performance slides in hand, I started searching for problems in prioritized order:
  1. Wrong Hibernate inheritance strategy
  2. N+1 select problems
  3. Loading of too big object hierarchies
  4. Long query times because of complex dynamic queries
  5. No caching of rarely modified objects

1) I quickly found a couple of object hierarchies mapped into the database using the TABLE_PER_CLASS inheritance strategy. Replacing these with SINGLE_TABLE and JOINED already improved performance on most pages by a factor of ~5.
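For illustration, the change is essentially one annotation on the root of the hierarchy (a sketch with a hypothetical Payment hierarchy, not the customer's actual model):

```java
import javax.persistence.*;

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)  // was TABLE_PER_CLASS
@DiscriminatorColumn(name = "payment_type")
public abstract class Payment {
    @Id @GeneratedValue
    Long id;
}

@Entity
public class CardPayment extends Payment {
    String cardNumber;
}
```

SINGLE_TABLE avoids the expensive UNION selects that TABLE_PER_CLASS generates for polymorphic queries, at the price of nullable subclass columns; JOINED trades that for joins instead.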

2) The next problem was the well-known N+1 select problem of Hibernate applications. As in most Hibernate applications, object relations were often mapped using @OneToOne and @ManyToOne, resulting in an eager load of the related entity. E.g. a person is related to his company:

public class Person {
   @ManyToOne   // eager by default
   Company worksInCompany;
}

Executing the query "from Person p where p.username = :p" will result in one SQL select for the persons, after which Hibernate will traverse the list and execute a query to fetch the company for each person. This may result in N extra queries.

The problem was most often solved by:
  • Marking some relations lazy to prevent Hibernate from loading the related objects by default, e.g. @ManyToOne(fetch=LAZY)
  • Fetching some of these relations eagerly where needed by using a join fetch, e.g. instead of "from Person p where p.username = :p", execute "from Person p join fetch p.worksInCompany where p.username = :p"

2.5) The third problem we stumbled into was unexpected, but admittedly I should have seen it from the beginning. The warp-persist @Transaction annotation was used on DAO findXXX methods, starting and stopping transactions when searching for data. A poor web application architecture resulted in another set of N+1 problems, which could not be solved by tuning Hibernate queries. Re-engineering the web application architecture was not an option due to project time constraints. Instead we found that executing findXXX without a transaction resulted in acceptable performance. As always: only use transactions when necessary (often only when modifying database content)!

3) was solved while working on 2)

4) and 5) I never got to look at before acceptable performance was reached.

There are also other Hibernate performance tuning techniques, which I haven't even mentioned.

More information: