Open Source

October 27, 2015

SSL web site using Let's Encrypt

Open Source

Yesterday I was accepted into the beta program of Let's Encrypt, and I received an email explaining how to obtain the server-side SSL certificates for this web site.

The setup is pretty straightforward, though you need to pay attention to how you set things up on your web server. I use nginx and this is the configuration I had to add to serve HTTPS requests:

server {
    listen 443 ssl;

    ssl on;

    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;

    # ...
}


To authenticate your web site, the instructions tell you to place some files in a .well-known/acme-challenge directory. I placed those directly in the root directory that serves my site. Just make sure you have the proper permissions on the directories and files so the web server can serve them, and that the Content-Type is set to application/jose+json. On Apache, this is how you do it:

<DirectoryMatch \.well-known/acme-challenge>
  ForceType application/jose+json
</DirectoryMatch>

For nginx add a config like this inside the server block for your site:

location /.well-known/acme-challenge {
  root /your/htdocs/directory/here;
  default_type application/jose+json;
}
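
Before requesting the certificate, it's worth checking that a file dropped into that directory is actually served with the expected type. This curl check is just a sketch; the domain and token file name are placeholders:

```shell
# Place a dummy file first, e.g. .well-known/acme-challenge/test-token,
# then verify the response headers (www.example.com is a placeholder):
curl -sI http://www.example.com/.well-known/acme-challenge/test-token | grep -i '^Content-Type'
```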

The SSL certificates are valid for 90 days during the beta test period, but I expect they will extend them to a more usual 1 year once everything works smoothly.

Once you're done setting things up, head over to SSL Labs and verify that your SSL web site is properly set up.
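
If you prefer a quick check from the command line first, openssl can show the certificate your server actually presents. A sketch, with a placeholder domain:

```shell
# Print the issuer and validity dates of the certificate the server serves
# (www.example.com is a placeholder for your own domain)
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null |
  openssl x509 -noout -issuer -dates
```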

Overall a very pleasant experience, and I would say even better than what you get from other SSL certificate authorities.

To sign up for Let's Encrypt's Beta program click here.

Posted by ovidiu at 07:03 PM | Comments (42) |

June 17, 2011

Rsync over ssh: the dreaded "writefd_unbuffered failed to write 4 bytes to socket" error

Open Source

I use rsnapshot to implement a backup solution for the computers in my home. Rsnapshot runs as a cron job on a ReadyNAS Pro Business Edition system with six 2TB drives inside (a total of 7.4TB in an X-RAID2 configuration). It backs up the data from a Mac Pro using rsync over ssh. I would have used TimeMachine, but its inability to reliably run on non-Apple hardware and with volumes larger than 2TB drives me nuts.

Every once in a while, especially when I have some fresh new data to be backed up, I see rsync start up but then mysteriously die after a short while (as seen in /var/log/rsnapshot.log). Running the same command in a terminal gives the following error:

rsync: writefd_unbuffered failed to write 4 bytes to socket [generator]: Broken pipe (32)
rsync error: timeout in data send/receive (code 30) at io.c(1530) [generator=3.0.7]
rsync error: received SIGUSR1 (code 19) at main.c(1306) [receiver=3.0.7]

Both the source and the destination machines were using rsync 3.0.7. The command line I was running on the source machine (readynas):

root@readynas:~> /usr/bin/rsync -a -v --iconv=UTF-8 --timeout=180 --archive \
    --compress --delete --numeric-ids --relative --delete-excluded --copy-unsafe-links \
    --rsync-path="/opt/local/bin/rsync" --rsh="/usr/bin/ssh -p 22 \
    -o 'ClearAllForwardings yes' -o 'ServerAliveInterval 60'" \
    root@monster:/Volumes/BigDisk /backup/hourly.0/monster/BigDisk/

I upgraded rsync to 3.0.8 on both the source machine (the ReadyNAS) and my Mac Pro, by manually compiling the latest version. However, the error persisted.

Googling around didn't reveal any solution to the problem, which apparently goes back to at least 2008. On one forum a poster suggested removing the compression from rsync and letting ssh handle it. This works for small files, but it tends to break on larger files (over 15GB).
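
For reference, that suggestion amounts to dropping --compress and passing -C to ssh instead. A sketch based on my command line above (shortened; not the exact command I ran):

```shell
# No rsync-level compression; ssh's -C compresses the stream instead
/usr/bin/rsync -a --timeout=180 --numeric-ids --relative \
    --rsh="/usr/bin/ssh -C -p 22 -o 'ServerAliveInterval 60'" \
    root@monster:/Volumes/BigDisk /backup/hourly.0/monster/BigDisk/
```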

What seems to work best, however, is to remove compression altogether. Another change I made was to use rsh instead of ssh. Here is the new command line (note the new paths to rsync 3.0.8 on each machine):

root@readynas:~> /usr/local/bin/rsync -a -v --iconv=UTF-8 --timeout=180 --archive \
    --delete --numeric-ids --relative --delete-excluded --copy-unsafe-links \
    --rsync-path="/usr/local/bin/rsync" --rsh="/usr/bin/rsh" \
    root@monster:/Volumes/BigDisk /backup/hourly.0/monster/BigDisk/
Posted by ovidiu at 02:48 PM | Comments (1) |

December 08, 2010

Transcode AVCHD to MPEG using ffmpeg

Open Source | Photo

I recently bought a Panasonic Lumix DMC-LX5 camera, which records video in both MPEG and AVCHD formats. The camera has some nice improvements over the previous LX3 camera. I really like the extended zoom range, 24-90mm versus the 24-60mm in the old camera. Another feature I really love is the ability to zoom while shooting a movie. The sound quality improved significantly as well; some videos I took with the old camera had strange sound issues.

One thing that drove me nuts, however, was the inability of iMovie 7 or Final Cut Express to import the AVCHD movies I made with the camera. They would import only the first few minutes of a movie and invariably stop after that. The MacOS X software that comes with the camera doesn't handle movie files, and the Windows version sucks badly.

So I was stuck with no way of converting the AVCHD movies to an MPEG file format that can be viewed on MacOS X. Adobe Photoshop Lightroom 3, the program I use to catalog the photos and videos I take, does not understand AVCHD either.

You can however convert the .MTS files from the AVCHD directory created by the camera using the open source ffmpeg tool. To do this, make sure you have MacPorts installed on your computer, then install ffmpeg like this:

sudo port install ffmpeg +gpl +lame +x264 +xvid +mp3 +aac

Once you have ffmpeg installed, you can convert the MTS files like this:

ffmpeg -i <file>.MTS -vcodec mpeg4 -f mp4 -sameq <file>.mpg
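
Since the camera writes one .MTS file per clip, a small shell loop can convert a whole AVCHD directory in one go. This is a sketch using the same ffmpeg invocation as above:

```shell
# Convert every .MTS clip in the current directory to an .mpg file
for f in *.MTS; do
  ffmpeg -i "$f" -vcodec mpeg4 -f mp4 -sameq "${f%.MTS}.mpg"
done
```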
Posted by ovidiu at 10:54 AM | Comments (0) |

August 27, 2010

How to safely access a remote IP camera

Open Source

Some time ago I bought a Panasonic BL-C230A Wireless Internet Security Camera so I can monitor my home when I'm away. I wanted to be able to get notifications via email when motion is detected at home, and to be able to remotely connect to the camera to see what's going on.

Despite the average product rating on Amazon's site, I found the camera to be quite good for what it does, and at its price. It works pretty decently with Chrome or Firefox running on MacOS X or Linux, though you won't get any sound because the browsers lack a G.726 audio decoder.

The camera comes with a built-in motion detector, and you can set it up to email you the images that triggered the detection, or have it upload those images to an FTP site. It would be nicer if it could upload or email short movies of the detected motion, but it doesn't do that. The few open source monitoring options I investigated did not seem to provide an easy way to do this either.

The camera itself comes with instructions on how to set up your home router to allow remote access from outside your home network. I wouldn't trust Panasonic, or any other vendor for that matter, with the security of the web server implementation running on their device. Instead of exposing the camera's web server directly to the Internet, I decided to use SSH tunneling to allow safe remote access to it.

I use an Apple AirPort Extreme Base Station as my router and WiFi access point. There are a few features of this router that I like:

  • it implements 802.11b/g/n
  • you can set it up to provide two different WiFi networks, one for the trusted computers you own, another one for guests. The computers on the guest network cannot access those on the trusted network. This is great if you have people coming by your house that want to connect to the Internet using your WiFi router.
  • the router has 4 Gigabit Ethernet ports which allow you to connect computers using real copper wires, for faster data transfer between them.
  • it has a pretty flexible interface, allowing you to customize your network the way you want. You can tell for example its built-in DHCP server to always provide a given IP address to a network device based on a MAC address.

The only downside is that it doesn't provide a way to automatically update a DynDNS account when its IP address changes. This however can be easily worked around using ddclient, a small open source program that you can run on a computer inside your home network. The program automatically updates your DynDNS account with the public IP address of your router.

Inside the firewall I have a small Asus Eee Box computer running Ubuntu Linux, which acts as a file server, keeping all my music files so I can access them from wherever I am. I set up this computer with the IP address of and forward port 22 on the router to it. On the Linux box, I only allow SSH connections if the client presents a valid SSH key.

The Linux computer is on the same network with my Panasonic IP camera, which uses the IP address. Since the camera is running a web server on port 80, I can open up a browser and point it to its IP address and I can see the camera's user interface.

Using the above I can now access my IP camera from outside my home. On my MacOS X or Linux laptop, I first set up an SSH tunnel, which forwards my camera's web server port 80 over a secure, encrypted connection:

ssh -L 8080: -o 'ServerAliveInterval 60' -N -S none

To view my camera, I then open up a browser and go to http://localhost:8080. It works like a charm!
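
For reference, the general shape of the tunnel command is below; the camera address and the ssh destination are hypothetical placeholders, not my actual network values:

```shell
# Forward local port 8080 to the camera's port 80 through the home Linux box.
# 192.168.0.20 (camera) and home.example.com (router's public name) are placeholders.
ssh -L 8080:192.168.0.20:80 -o 'ServerAliveInterval 60' -N -S none user@home.example.com
```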

Posted by ovidiu at 11:42 PM | Comments (0) |

June 16, 2010

Arduino Tiny Web Server - part 2

Arduino | Hardware | Open Source

Update (December 30, 2010): Latest version of Arduino TinyWebServer:

Update (December 8, 2010): The below picture of the required Arduino hardware is obsolete. Look at this newer post for updated information on the new hardware.

In part 1 of the Arduino Tiny Web Server I presented some hardware modifications and changes to the Arduino Ethernet shield and the Adafruit Data Logging shield.

In this part I present the Arduino TinyWebServer library or TWS.

TinyWebServer allows you to provide a Web interface to your Arduino-based project. You can get very creative with this, and add a full Ajax web interface to your project. This is possible because it's the Web browser doing all the UI work, while your Arduino board interacts with the hardware connected to it.

I'm using TWS in a remotely controlled projection screen that I'm currently building to replace an existing system. The end goal is to be able to control the projection screen from an Android phone, and let my kids choose to watch movies either on TV or on the big screen. More on this in a later post; until then, read below to see how this works.

The library has been developed on MacOS X and should most likely work fine on Linux. No guarantees about Windows, but I'd love to hear if it works for you.

As I mentioned in part 1, there are several hardware modifications, as well as software modifications that need to be made. Make sure you have those modifications done to your hardware before proceeding further.

To make things easy, I've decided to bundle the TWS library with the modifications to those libraries, as well as with two additional libraries that TWS depends on: Georg Kaindl's EthernetDHCP and Mikal Hart's Flash library.

After you download and unzip the package, copy the contents of the directory into the directory where you store your Arduino libraries.

The library comes with a few examples; look in TinyWebServer/examples. The simplest one is SimpleWebServer, which shows how to write a basic HTTP server with a GET handler. The more complex one, FileUpload, shows how to implement a PUT handler that accepts file uploads and writes them to the SD card, and how to serve those files in GET requests.

Basic web server

To make use of the TWS library, you need to include the following in your sketch:

#include <Ethernet.h>
#include <EthernetDHCP.h>
#include <Flash.h>
#include <Fat16.h>
#include <Fat16util.h>
#include <TinyWebServer.h>

EthernetDHCP is optional, but it makes acquiring an IP address a lot easier if you have a DHCP server in your network.

TWS is implemented by the TinyWebServer class. The constructor method takes two arguments. The first one is a list of handlers, functions to be invoked when a particular URL is requested by an HTTP client. The second one is a list of HTTP header names that are needed by the implementation of your handlers. More on these later.

An HTTP handler is a simple function that takes as argument a reference to the TinyWebServer object. When you create the TinyWebServer class, you need to pass in the handlers for the various URLs. Here is a simple example of a web server with a single handler.

static uint8_t mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };

boolean index_handler(TinyWebServer& web_server) {
  web_server << F("<html><body><h1>Hello World!</h1></body></html>\n");
  return true;
}

TinyWebServer::PathHandler handlers[] = {
  // Register the index_handler for GET requests on /
  {"/", TinyWebServer::GET, &index_handler },
  {NULL}, // The array has to be NULL terminated this way
};

// Create an instance of the web server. No HTTP headers are requested
// by the HTTP request handlers.
TinyWebServer web = TinyWebServer(handlers, NULL);

void setup() {
  // Initialize the serial port, Ethernet and SD card here.
}

void loop() {
  web.process();
}

In the loop() function we need to call the process() method to make sure HTTP requests are serviced. If there is no new request, the method returns immediately. Otherwise the process() method blocks until the request is handled.

For a complete working example, look in TinyWebServer/examples/SimpleWebServer.

Serving files from the SD card

Now that we've seen the basics, let's see how we can extend this web server to serve files stored on the SD card. The idea is to register a handler that serves any URL. Once the handler is invoked, it interprets the URL path as a file name on the SD card and returns that file's contents.

Fat16 file; // global Fat16 object used to read files off the SD card

boolean file_handler(TinyWebServer& web_server) {
  char* filename = TinyWebServer::get_file_from_path(web_server.get_path());
  if (!filename) {
    web_server << "Could not parse URL";
  } else {
    TinyWebServer::MimeType mime_type
      = TinyWebServer::get_mime_type_from_filename(filename);
    web_server.send_error_code(mime_type, 200);
    if (, O_READ)) {
      // The file was found; stream its contents back to the client,
      // then close it.
    } else {
      web_server << "Could not find file: " << filename << "\n";
    }
    free(filename);
  }
  return true;
}

We can now register this in the handlers array:

TinyWebServer::PathHandler handlers[] = {
  {"/" "*", TinyWebServer::GET, &file_handler },
  {NULL},
};

Note how the URL for the HTTP request is specified. We want it to be /*, very much like a regular expression. However Arduino's IDE preprocessor has a bug in how it handles /* inside strings. By specifying the string as "/" "*" we avoid the bug, while letting the compiler optimize and concatenate the two strings into a single one.

The * works only at the end of a URL, anywhere else it would be interpreted as part of the URL. If the * is at the end of the URL, the code in TinyWebServer assumes the handler can process requests that match the URL prefix. For example, if the URL string was /html/* then any URL starting with /html/ would be handled by the specified handler. In our case, since we specified /*, any URL starting with / (except for the top level / URL) will invoke the specified handler.

Uploading files to the web server and storing them on the SD card's file system

Now wouldn't it be nice to update Arduino's Web server files using HTTP? This way we can focus on building the actual interface with the hardware, and provide just enough HTTP handlers to interact with it. After we implement a minimal user interface, we can iterate on it without having to remove the SD card from the embedded project, copy over the HTML, JavaScript and/or image files on a computer, and plug the card back in. We could do this remotely from the computer, using a simple script.

TinyWebServer provides a simple file upload HTTP handler that uses the HTTP 1.0 PUT method. This allows you to implement an Ajax interface using XMLHttpRequest or simply use a tool like curl to implement file uploads.

Here's how you add file uploads to your Arduino web server:

TinyWebServer::PathHandler handlers[] = {
  // `put_handler' is defined in TinyWebServer
  {"/upload/" "*", TinyWebServer::PUT, &TinyWebPutHandler::put_handler },
  {"/" "*", TinyWebServer::GET, &file_handler },
  {NULL},
};

Note that the order in which you declare the handlers is important. The URLs are matched in the order in which they are declared.

This is where the headers array mentioned before comes into the picture. The put_handler makes use of the Content-Length header. To avoid unnecessary work and minimize precious memory usage, TinyWebServer does not do any header processing unless instructed to. To enable it, you need to declare an array of the header names your handlers are interested in. In this case, we need to add Content-Length.

const char* headers[] = {
  "Content-Length",
  NULL
};

And we now initialize the instance of TinyWebServer like this:

TinyWebServer web = TinyWebServer(handlers, headers);

The put_handler method is really generic: it doesn't actually implement the code to write the file to disk. Instead, the method relies on a user-provided function that implements the actual logic. This allows you to use a file system implementation other than Fat16, or do something totally different than writing the file to disk.

The user-provided function takes 4 parameters. The first is a reference to the TinyWebServer instance. The second is a PutAction enum, which can be START, WRITE or END. START and END are called exactly once during a PUT handler's execution, while WRITE is called multiple times. Each time the function is called with the WRITE action, the third and fourth parameters are set to a buffer and the number of bytes in that buffer that should be used.

Here is a small example of a user-provided function that writes the PUT request's content to a file:

void file_uploader_handler(TinyWebServer& web_server,
                           TinyWebPutHandler::PutAction action,
                           char* buffer, int size) {
  static uint32_t start_time;

  switch (action) {
  case TinyWebPutHandler::START:
    start_time = millis();
    if (!file.isOpen()) {
      // File is not opened, create it. First obtain the desired name
      // from the request path.
      char* fname = web_server.get_file_from_path(web_server.get_path());
      if (fname) {
        Serial << "Creating " << fname << "\n";, O_CREAT | O_WRITE | O_TRUNC);
        free(fname);
      }
    }
    break;

  case TinyWebPutHandler::WRITE:
    if (file.isOpen()) {
      file.write(buffer, size);
    }
    break;

  case TinyWebPutHandler::END:
    Serial << "Wrote " << file.fileSize() << " bytes in "
           << millis() - start_time << " millis\n";
    file.close();
    break;
  }
}

To activate this user provided function, assign its address to put_handler_fn, like this:

void setup() {
  // ...

  // Assign our function to `put_handler_fn'.
  TinyWebPutHandler::put_handler_fn = file_uploader_handler;

  // ...
}

You can now test uploading a file using curl:

curl -0 -T index.htm http://my-arduino-ip-address/upload
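
The "simple script" mentioned earlier can be little more than a loop over the site's files. A sketch, with the Arduino's address as a placeholder; the trailing slash makes curl append each local file name to the URL:

```shell
# Upload every HTML and JavaScript file to the Arduino's /upload handler
for f in *.htm *.js; do
  curl -0 -T "$f" "http://my-arduino-ip-address/upload/"
done
```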

For a complete working example of the file upload and serving web server, look in TinyWebServer/examples/FileUpload.

Advanced topic: persistent HTTP connections

Sometimes it's useful to have an HTTP client start a long-running request. For example, I need to be able to enter an IR learning process. This means that I cannot afford to have TinyWebServer's process() block while serving the /learn request that initiated the IR learning process. Instead I want the handler of the /learn request to set a variable indicating that IR learning is active, and then return immediately.

As you may have noticed, the HTTP handlers return a boolean. If the returned value is true, as was the case in the examples above, the connection to the HTTP client is closed immediately. If the returned value is false, the connection is left open; your handler should save the Client object handling the original request, and your code becomes responsible for closing it when it's no longer needed.

To obtain the Client object, use the get_client() method while in the HTTP handler. You can write asynchronously to the client, to update it with the state of the web server.

In my remotely controlled projection screen application, I have another handler on /cancel that forcibly closes the /learn client. Otherwise, the /learn Client connection is closed at the end of the IR learning procedure. Since the Ethernet shield allows at most 4 HTTP clients open at the same time (it has only 4 client sockets), in my application I allow only one /learn handler to be active at any given time.

Posted by ovidiu at 01:56 PM | Comments (15) |

June 15, 2010

Arduino Tiny Web Server - part 1

Arduino | Hardware | Open Source

Update (December 8, 2010): The below information of the required Arduino hardware is obsolete and left here for informational purposes. Look at this newer post for updated information on the new hardware.

Arduino TinyWebServer is a small and extensible HTTP server implementation designed to run in a limited amount of space on an Arduino Duemilanove. It uses the Ethernet Shield for network connectivity (from Sparkfun or from Adafruit), and the Adafruit Data Logging shield for storage purposes.

Web pages, images and other content can be copied manually on the SD card or uploaded through the HTTP server. The latter allows you to push new versions of the web server's content without the need to remove the card, which can be a pain in embedded applications.

In the first part I present some changes that have to be made to the hardware used and its accompanying software. Part two presents a small open source software library that implements the Arduino TinyWebServer.

Hardware modifications: Data Logging shield

The hardware shields need a few modifications in order to work together. The boards were designed to work independently and use the default pins allocated to the hardware SPI bus (the CS, MOSI, MISO and SCK lines on pins 10, 11, 12 and 13 of an Arduino Duemilanove). When stacked together they end up in a bus conflict and don't work.

The conflict is solved by having the two boards use different CS pins. They can still share the MOSI, MISO and SCK lines, and if it weren't for a buggy chip on the Ethernet shield, we would have ended up using only 5 digital I/O pins in total for the whole setup. See below for more info.

To make things easy, I chose to use a different CS pin for the Adafruit Data Logging shield: I use pin 9 as the CS pin. For this to work, first make sure you cut out the original trace that goes to pin 10, as in the picture below.

The Data Logging shield board comes unassembled. After you solder all the components on it, run a wire from the CS pin to pin 9, as shown in the picture below.

Hardware modifications: Ethernet shield

The Ethernet shield uses a Wiznet W5100 chip, which has a buggy hardware SPI implementation. In a post on Adafruit's forum, jaredforshey pointed me to this Arduino playground page which points to an easy way to fix this.

The proposed solution disables the chip's SPI part when not in use. This is done by connecting pin 8 to the lower PAD on the board, as shown below. At the same time, make sure you cut the trace leading from pin 8. This bug ends up costing us another pin, for a total of 6 I/O pins for the whole setup.

Software modifications: Fat16 library

To read and write files on an SD card, we need to be able to access a file system on the SD card. There are two main file systems used on SD cards: FAT16 and FAT32. The main differences between them are the maximum card sizes supported and more importantly, file naming conventions. FAT16 allows only the old 8.3 DOS file format and cards up to 2GB.

Arduino supports both file systems on SD cards using either of these libraries: Fat16 or SdFat. For all its limitations, the Fat16 library is smaller than the FAT32 one, so I decided to go with it.

Our Data Logging shield uses pin 9 as the CS pin. The Fat16 library assumes the CS pin is pin 10, so we need to modify that in the code. For the Arduino Duemilanove, the definition of SPI_SS_PIN in SdCard.h needs to change from 10 to 9.

Software modifications: Arduino's Ethernet library

The Ethernet library shipped with the Arduino 018 package has a bug. In the Client class in Client.h, the read() method does not differentiate between a 0xFF byte and the Ethernet hardware having no data available. This is usually not a problem if all you serve through the web server are text files, including HTML. For binary files however (images, zip files etc.), it is a problem.

To fix this problem, I've added two more methods to the Client class:

  int16_t read(uint8_t* ch);
  int16_t read(uint8_t* buf, uint16_t size);

The first reads a character and puts its value at the address pointed to by ch. The method returns 1 if it succeeded reading a character, 0 otherwise (as when there is no data available). The second method fills in the value of buf with as many characters as it can, up to size. It returns the number of characters it was able to read, or 0 if none were read. Here is how they're implemented:

int16_t Client::read(uint8_t *ch) {
  if (!connected() || !available()) {
    return 0;
  }
  return recv(_sock, ch, 1);
}

int16_t Client::read(uint8_t *buf, uint16_t size) {
  uint16_t i;
  for (i = 0; i < size; i++) {
    if (!read(buf + i)) {
      break;
    }
  }
  return i;
}

The second change to the Ethernet library is in utility/spi.h, to fix the hardware bug with the Wiznet chip. This change is described on the Arduino playground page.

Posted by ovidiu at 03:31 PM | Comments (2) |

April 01, 2004

Setting up ssh-agent on Windows XP

Emacs | Open Source

As you probably know already, ssh-agent is an easy way to enter the passwords for your private SSH keys only once per session. On Linux and Unix systems, when using X-Windows, it is very easy to set up ssh-agent as the parent process of your window manager. In fact most Linux distributions start up the window manager this way.

The way ssh-agent works is by setting up two environment variables, SSH_AUTH_SOCK and SSH_AGENT_PID. The first is used to communicate the location of the Unix domain socket on which ssh-agent is listening for requests. The second is used to identify the Unix process id of ssh-agent, so it can be killed by ssh-add -k.

These environment variables have to be communicated to every process that wants to use ssh later on, so ssh can connect to the ssh-agent process and fetch the decrypted private keys. In the Unix parent-child process model, this works just fine. ssh-agent does the work of creating the Unix domain socket and then forks a child process. The child first exports the two environment variables above, then execs the target process - the window manager for X-Windows. This way all the processes that inherit from it will have these environment variables available.

On Windows this is not possible, since there is no way to interpose some other process before the window manager. This of course, assumes the same parent-child relationship of processes as in Unix. The alternative is to always start ssh-agent on some well-known socket. Below, I assume you use Cygwin, an excellent free-software Unix emulator for Windows.

There are a few things you need to do. First, in your Windows home directory (usually C:\Documents and Settings\yourusername), make sure you have a .bash_profile that reads:

. ~/.bashrc

Then create a .bashrc file in your home directory, and add to it the following:

export SSH_AUTH_SOCK=/tmp/.ssh-socket

ssh-add -l >/dev/null 2>&1
if [ $? = 2 ]; then
  # Exit status 2 means couldn't connect to ssh-agent; start one now
  ssh-agent -a $SSH_AUTH_SOCK >/tmp/.ssh-script
  . /tmp/.ssh-script
  echo $SSH_AGENT_PID >/tmp/.ssh-agent-pid
fi

function kill-agent {
  pid=`cat /tmp/.ssh-agent-pid`
  kill $pid
}

Next, go to the Start menu, "Control Panel" -> "System" -> "Advanced" -> "Environment Variables" and add a new variable SSH_AUTH_SOCK, whose value should be /tmp/.ssh-socket. Hit OK to make the change persistent.

What happens next? The first time you open a bash terminal, an ssh-agent process is automatically created. This process listens on the Unix domain socket /tmp/.ssh-socket. Run ssh-add at the prompt to enter the password for your private key(s).

Now when you open another terminal, it will share the same ssh-agent process thanks to the SSH_AUTH_SOCK definition. Running ssh or any other command that uses ssh underneath will work without having to enter the password for your keys.

It will also work if you run a cygwin-ified version of XEmacs. Tramp, CVS or any other Emacs package that uses ssh will work just fine now.

The only requirement is for these programs to be cygwin-ified, otherwise the sharing described above doesn't work.

Posted by ovidiu at 11:35 AM | Comments (2) |

March 02, 2004

Virus attack on Apache committers

Open Source
Moon after storm - early morning Mount Hamilton, California

Today I started to receive a flood of email messages like this:

Date: Tue Mar 2, 2004  6:21:20 PM US/Pacific
Subject: E-mail account security warning.
Attachments: There is 1 attachment

Hello user  of e-mail server,

Our main mailing server will  be  temporary  unavaible for next two days, 
to continue receiving mail in these  days you have to configure  our free
auto-forwarding service.

For details see the  attached file.

The Management,
    The team
<Text Document.pif> attachment

This obviously looks fake, but it got me thinking, after a discussion I had not long ago with Steven. Somebody is clearly targeting Apache committers, perhaps to gain something more than simply placing a new virus on somebody's computer.

If you're an Apache committer on Windows and you receive such a message, beware!

Update: I just got another message at my email address. So it's not only Apache. The interesting thing is that whoever is behind this is filtering the email addresses and carefully constructing email messages targeted at a restricted set of people. It's the first time I've seen this happen.

I wonder when such virus email messages will be customized on a per-user basis :(

Posted by ovidiu at 08:17 PM |

January 21, 2004

SCO attacks GPL

Open Source
Yosemite on fire

SCO has decided to attack GPL by drafting this letter to Congress. (via The Register).

This attack is remarkably similar to Craig Mundie's attack on GPL and Open Source, from almost two years ago.

All of the usages of "free" in both letters are associated with money, not with freedom. In fact such freedom is deemed dangerous to our national economy and security. I hope not to see free software/open source developers labeled as terrorists.

Both letters completely ignore the fact that many companies make money by extending or building on top of these free software projects, while at the same time playing nice with respect to the communities involved. They also don't mention the innovations made possible by exactly this model: how about GCC, GDB, Emacs, Perl, Python, PHP, Apache, and many others?

Posted by ovidiu at 10:49 PM |

November 19, 2003

ApacheCon 2003 slides

Cocoon | Open Source

I've uploaded the slides for my ApacheCon 2003 session on the Cocoon control flow. The presentation describes the work I've done in Apache Cocoon to use continuations in programming Web applications.

I was looking at a similar presentation I did in November 2002 at the Cocoon GetTogether in Ghent, Belgium, and I noticed with great surprise that all of the items on the todo list have been implemented since then. Quite impressive! Especially since all of the items on the todo list were contributed by people other than me ;)

Posted by ovidiu at 10:43 PM |

November 17, 2003


Open Source

I arrived in Las Vegas last night, after the plane was delayed in San Jose for about 5 hours. Had I known that, I would have taken a later flight. Oh well...

Around 9pm I met with Steven, Bruno, Gianugo, Stefano, Gregory ?, and Pier. We had dinner at a Spanish restaurant in Venice - the fake one, of course.

I was pleasantly surprised to meet Steven again after last year's Cocoon GetTogether. He's a really great guy; last year we didn't have much time to talk. We discussed the flow engine at great length, especially the Rhino engine.

Today I woke up at 4:20am and headed to Valley of Fire, about an hour's drive NE of Las Vegas. I spent a great deal of time driving around and taking shots very early in the morning. It was cloudy at first, but around 7am the clouds broke up. I went on a few small hikes through a few red-rock canyons, and finally headed back around 12:20pm.

I'm now attending Stefano's talk on the dynamics of virtual communities. He's presenting his work on Agora, a tool for visualizing the links within the Apache community. Interesting stuff.

The next session I want to attend is Steven's introduction to Cocoon. He decided to stay at the hotel, rather than come with me to Valley of Fire, to polish his presentation. In retrospect, I think he made a good decision; I'm very tired after getting only 4 hours of sleep.

Posted by ovidiu at 02:32 PM |

October 29, 2003

mod_jk2 problems

Open Source

Looking at the logs last weekend I noticed several errors caused by mod_jk2. A friend of mine also pointed out some weird error messages showing up when accessing various servlets. These problems were totally random, but showed up when the servlet was accessed rapidly. The error messages, correlated with the log file entries, indicated problems in the communication between the web server and the servlet container:

[error] workerEnv.init() create slot epStat.14 failed
[debug] ../../common/jk_worker_ajp13.c(638): ajp13.getEndpoint(): endpoint creation ... endpoint:15 failed

These errors happened with the latest mod_jk2 compiled from source. I ended up installing mod_jk instead, and all these errors went away.
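For reference, switching from mod_jk2 back to mod_jk amounts to loading the other module and mapping the servlet URLs to a worker. A minimal sketch of the Apache side (the module path, worker name, and URL prefix are placeholders for your own setup):

```apache
# Load mod_jk instead of mod_jk2
LoadModule jk_module modules/mod_jk.so

# Worker definitions live in a separate properties file
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel error

# Forward servlet requests to the ajp13 worker
JkMount /servlets/* ajp13
```

The workers.properties file would then define an ajp13 worker pointing at the servlet container's AJP port (8009 by default in Tomcat).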

Posted by ovidiu at 08:46 AM |

September 15, 2003

ApacheCon 2003 speaker

Cocoon | Open Source

I just found out from Matthew's and Steven's blogs that ApacheCon is open for registration. It looks like my session on Cocoon control flow was accepted! Carsten, Stefano and Steven have sessions on Cocoon too, which makes things very interesting!

Posted by ovidiu at 11:11 PM |

July 16, 2003

No more Netscape

Open Source

Ugo Cei: Netscape is dead, long live Mozilla.

With the advent of Safari on Apple's MacOS X and IE on Windows, it looks like Mozilla's main target audience, at least from a consumer perspective, is going to be Linux only. This is really unfortunate; I was really hoping for a sequel to the browser war, with a different ending than the first one.

Hopefully one or more commercial organizations are going to pour in some money to support further development of Mozilla. Otherwise larger adoption of the browser might be hindered by perceived lack of support.

Posted by ovidiu at 02:31 AM |

July 08, 2003

CSS development with Mozilla

Open Source

Simon Willison has a very interesting Weblog entry about using CSS bookmarklets to speed up Web app development.

The most interesting ones I found were edit styles and ancestors. You first have to bookmark the links on Simon's page, then visit a site whose CSS style you want to develop. Now go to your bookmarks and select the edit styles and ancestors bookmarklets. The first one pops up a window with the CSS styles of the current page. The second one acts via side effects, showing you the DOM hierarchy for the HTML element the mouse is currently over. You can change the style of the Web page interactively in a very easy way.

Posted by ovidiu at 11:04 PM |

June 11, 2003

JavaOne - compiling programming languages to the JVM

Java | Open Source
Per Bothner explaining how Kawa works.

Per Bothner's talk on Kawa is about to start. Per has worked on various things over the years. He used to work for Cygnus before it was acquired by RedHat; he worked on GCC and many other tools.

17:16 It started. Apparently Per is the only presenter not affiliated with any company, which is almost unheard of at JavaOne. The attendance is pretty light; it's either too late in the day, or people don't care about languages other than Java.

What do you do when you need a higher-level language than Java? Well, you can write an interpreter. However, if you do repetitive computations, it can get pretty slow. Another approach is to compile the program in your language to Java source code.

The best approach is to compile directly to in-memory Java bytecodes. Per makes the interesting assertion that bytecodes are more general than Java source: you actually have goto statements.
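To make the contrast concrete, the "write an interpreter" approach Per mentions first can be sketched in a few lines of Java. All class names here are illustrative, not Kawa's: a tree of expression objects is re-walked on every evaluation, which is exactly the repeated work that compiling to bytecode avoids.

```java
// A minimal tree-walking interpreter for arithmetic expressions.
// Every call to eval() re-traverses the tree -- the repeated work
// that compiling to in-memory bytecode would eliminate.
interface Expr {
    double eval();
}

class Num implements Expr {
    final double value;
    Num(double value) { this.value = value; }
    public double eval() { return value; }
}

class Add implements Expr {
    final Expr left, right;
    Add(Expr left, Expr right) { this.left = left; this.right = right; }
    public double eval() { return left.eval() + right.eval(); }
}

class Mul implements Expr {
    final Expr left, right;
    Mul(Expr left, Expr right) { this.left = left; this.right = right; }
    public double eval() { return left.eval() * right.eval(); }
}

public class TinyInterp {
    public static void main(String[] args) {
        // Build the tree for (1 + 2) * 3 and evaluate it.
        Expr e = new Mul(new Add(new Num(1), new Num(2)), new Num(3));
        System.out.println(e.eval());  // prints 9.0
    }
}
```

A bytecode compiler would instead walk this tree once, emit JVM instructions for it, and let the JIT handle subsequent evaluations.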

Kawa was written while Per was at Cygnus in 1996 and is a GNU project, with a more liberal license than the GPL. Kawa can be run interactively from the command line, or it can compile a program to a file. Languages implemented in Kawa: Scheme, XQuery, XSLT, Emacs Lisp, etc.

Short introduction to Scheme, an "impure" functional language because of assignment. You run Kawa by doing java kawa.repl and you get the interactive prompt. It supports big integers. You can write applets in Kawa's Scheme.

Another language supported is Common Lisp. Guy Steele was instrumental in the Scheme, Common Lisp and Java languages.

Emacs Lisp: "Emacs is still the most powerful text editor". Kawa compiles Elisp to Java bytecodes. The goal is a modern Emacs re-implementation that can efficiently use existing Emacs packages. It uses Swing to represent and display text. A nice JEmacs screenshot; unfortunately not many people actually contribute to it.

XQuery is a very high-level language used for querying, combining and generating XML-like data sets. It is a superset of XPath. Kawa supports XQuery with Qexo, which is missing some features, but is still very useful. An example of XQuery generating an HTML fragment: it uses HTML markup and XQuery syntax to generate the output page, and can generate XHTML 1.0, HTML, or Scheme from the same file. The example XQuery program can be compiled to a Java class with a main, or to a servlet which can be deployed in a servlet container.

XQuery can be considered an alternative to JSP. An XQuery program can also be compiled to a CGI program, though that's not very useful these days. You can embed the XQuery engine in a Java program and take advantage of its power.

The next language shown is XSLT. The Kawa implementation compiles an XSLT stylesheet into a Java class. The project is incomplete, but it's a useful example.

BRL is Beautiful Report Language, a template language much like JSP. Instead of embedding Java, you embed Scheme. KRL - Kawa Report Language - is Per's implementation. The language uses square brackets to embed Scheme code. You can embed such code within HTML tags.

Nice is a strongly typed language with multi-methods, parametric types, anonymous functions, tuples and multiple implementation. KRL and Nice were both written by people other than Per.

Implementation. Each language is a subclass of Interpreter. Each Interpreter uses a Lexer to parse an expression or a program file. The result is an Expression instance, and there are many subclasses of Expression. Once you have an Expression object, you call its compile() method to compile the script. This method takes two arguments: a Compilation object for managing the state, and a Target object specifying where to leave the result, usually the JVM's stack.
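The pipeline just described can be mirrored in a skeletal Java sketch. The class names follow the talk (Interpreter, Expression, Compilation, Target; the Lexer step is folded into parse() here), but the bodies are purely illustrative and bear no relation to Kawa's actual code:

```java
// Skeletal sketch of the compiler pipeline described above.
// Class names follow the talk; bodies are illustrative only.
abstract class Expression {
    // Compile this expression, leaving the result where `target`
    // says (for the JVM, usually the operand stack).
    abstract void compile(Compilation comp, Target target);
}

class Compilation {
    // Holds per-compilation state; here just the emitted instructions.
    final StringBuilder emitted = new StringBuilder();
    void emit(String insn) { emitted.append(insn).append('\n'); }
}

class Target {
    // Leave the result on the JVM stack.
    static final Target STACK = new Target();
}

class Literal extends Expression {
    final int value;
    Literal(int value) { this.value = value; }
    void compile(Compilation comp, Target target) {
        comp.emit("ldc " + value);
    }
}

abstract class Interpreter {
    // Each language subclasses Interpreter and turns source text
    // into an Expression tree.
    abstract Expression parse(String source);
}

class ToyLang extends Interpreter {
    // A toy "language" that only understands integer literals.
    Expression parse(String source) {
        return new Literal(Integer.parseInt(source.trim()));
    }
}

public class PipelineSketch {
    public static void main(String[] args) {
        Interpreter lang = new ToyLang();
        Expression e = lang.parse("42");
        Compilation comp = new Compilation();
        e.compile(comp, Target.STACK);
        System.out.print(comp.emitted);  // prints: ldc 42
    }
}
```

The real system emits actual JVM bytecodes through gnu.bytecode rather than strings, but the shape of the traversal is the same.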

The implementation uses the gnu.bytecode package written by Per for handling bytecodes and .class files: code generation, reading, writing, printing and disassembling. This is a library for dealing with the very low-level bytecodes. Apache has the equivalent BCEL, but Per claims gnu.bytecode is more efficient because it doesn't generate a Java object for each bytecode being output.

In summary, Kawa includes a good compiler and useful libraries. Scheme and XQuery are the most popular languages on Kawa. The license is either the GPL or a more liberal license that allows you to include Kawa in a commercial application if you don't make any modifications to the original code. If you do make such changes, you are required to submit them back to Kawa.

Kawa is available at, Qexo could be found at

Questions. XQuery was started in the summer of 2001, and is still in the works. Per works on it part-time; his day job is at Apple (as a contractor - working on GCC?). Kawa's Scheme does not do tail-call elimination. Writing parsers with Yacc sucks; Per prefers writing them by hand. GCC is replacing the Yacc parser with a hand-written recursive-descent parser. Kawa is an optimizing compiler, sort of: it doesn't do common subexpression elimination, and it uses simple-minded register allocation. Errors generated at runtime will have an exception stacktrace that refers to the original source file.

Posted by ovidiu at 06:15 PM |

November 04, 2002

Bean Scripting Framework finally at Apache

Java | Open Source

Chuck Murcko wrote me to say that BSF is finally about to transition from IBM to Apache Jakarta! The mailing lists are up, although there are no Web archives yet; to register, go to the mailing list Web page. A new 2.3 release of BSF should become available once the Jakarta BSF Web site comes up, which should be any time now.

I'm working on an MVC Web application framework which uses scripting languages supported by BSF as an option to write the Controller. BSF is a central piece of it; that's why I'm so keen on seeing it healthy at Apache. I am also planning to use some AOP patterns to provide extensibility to the framework. More on this as code becomes available.

Posted by ovidiu at 03:49 PM |

September 27, 2002

Creating applications with Mozilla

Open Source

Brett Morgan:

While mooching through Creating Applications with Mozilla I noticed something very cool in chapter 12. Remote Mozilla Applications - where a mozilla application is pulled at run time from a web server.

This is really cool! I still dream of an RSS aggregator integrated in Mozilla, with the ability to subscribe to Weblogs you visit while browsing. The current process is too cumbersome: copy the RSS feed URL and manually enter it in the RSS aggregator. You almost always forget to do it.

Now that I switched to NetNewsWire, I was thinking I could write a simple AppleScript and invoke it from Mozilla to subscribe to the RSS feed in NetNewsWire. A specialized Mozilla application would be able to capture the URL and invoke the AppleScript, but NetNewsWire is not AppleScript enabled :(

Posted by ovidiu at 04:32 PM |

September 23, 2002

Copyright and licenses

Open Source

There was a lively discussion a few days ago about copyright and licenses. I'm posting this reply to Nicola Ken Barozzi here, as I think it is of greater interest.

When we are talking about software, no matter whether it is free software/open source or proprietary, there are two aspects of it we are interested in. The first is the copyright holder, and the second is the license.

The copyright holder is the person or organization who holds the rights to the code. The copyright holder decides what license the code should have, and can even release the code under two or more licenses. For example, he/she can release the code under an open source license and, at the same time, under a commercial, proprietary and more restrictive one. There are many reasons one might do this; I don't want to get into them right now.

The fact that you own the copyright allows you to release the code under any license you want. That's why in the past people were reluctant to give away their copyrights to organizations such as the FSF. One example is Linus with his baby, Linux. He chose to keep the copyright for himself, and let others contribute code to Linux without having to assign their copyrights to him. Linux is effectively owned by hundreds of people and organizations. I think this works marvelously: nobody can decide to make their piece of code proprietary and use it in a closed project; if they do, they have to use that code outside the context of Linux, which in many cases is useless. The GPL prevents them from incorporating other people's code in their closed, proprietary project, unless they obtain the approval of the other copyright holders, or the whole product is released under the GPL. The GPL allows you to do this, and as such it has a great advantage over any other free software/open source license.

If you're the copyright holder, you can still release your code under a proprietary license, even if it's also released under the GPL. This is the case with SGI's XFS filesystem, which is a proprietary piece of code still in use in SGI's Irix operating system. The fact that SGI is the copyright holder allows them to do this. What they cannot do is take other people's contributions to XFS, released by them only under the GPL, and incorporate them in their proprietary code (they could still do so if those copyright holders release that code to them under a proprietary license). This is very tricky, so the motivation for you as a copyright holder to open source the code in the first place must be clear. This is usually done with mature projects, which can only marginally benefit from other people's contributions. What you get instead from the community is more exposure, in terms of user testing and, of course, a lot of marketing visibility.

To alleviate the issue of not being able to incorporate other people's changes in your code, various organizations came up with their own licenses. One of the most well-known is the Mozilla Public License. This license explicitly states that the code must remain open source, no matter what changes other organizations make to it. This has the great advantage that code under such a license can be incorporated in any proprietary project by anybody in the world, and that any changes made to it are published under the MPL, and thus are open source as well. Unlike the GPL, the MPL does not restrict in any way the license of the final product you're incorporating the code into. This is a great advantage for enterprises, since they are not restricted in any way.

The LGPL is very similar to the MPL, but it requires you as a product vendor to include not only the libraries or jar files of the LGPLed product, but also your own libraries used to generate the final product. The reason for this is to preserve the rights to the LGPLed code, which can be modified and re-linked against the proprietary libraries to obtain the final executable.

The last license I'll discuss here is the Apache Public License, a variation of the well-known BSD license. Licenses in this category allow anybody to take the code released under such a license, modify it, and incorporate it into their proprietary project, without any restriction. Licenses of this type are very beneficial to companies like Microsoft, since they can benefit from the work of thousands of developers without contributing anything back.

Being a copyright holder allows you to release the code you hold the rights to under any combination of licenses. By giving away this copyright, you effectively lose the ability to incorporate the code into your own or somebody else's proprietary project under a more restrictive license. The FSF however gives you back such a right once you assign the copyright to them, while the ASF does not.

Posted by ovidiu at 11:25 AM |

September 10, 2002

Mozilla as a Web services platform

Open Source | Web services | Weblogs

Salon has an interesting article about Mozilla as platform for developing applications [via Slashdot].

It's so refreshing to see Mozilla being positioned as a platform, and not just as yet another browser. It will be interesting to see how much this platform will take off. It certainly makes sense to have Mozilla applications built around Web applications running on remote servers, since it would be easier to manage the remote content using a richer, desktop-like interface. Weblogs are a good example of such an application.

To be really successful, the Mozilla platform will need to penetrate the enterprise market. Mozilla could probably succeed better as a development platform for enterprise applications, than as yet another browser the IT departments have to support.

Having backend enterprise applications accessible as Web services would probably make Mozilla's job a lot easier, since there's no need to load proprietary code in the Mozilla application. Thus the only thing to be implemented in such an application is the user interface, which interacts with the backend Web services-enabled system.

Posted by ovidiu at 07:38 PM |

September 09, 2002

More on Bruce Perens departure from HP

Open Source

New York Times has an article about Bruce Perens' departure from HP. As I reported earlier, Bruce is no longer with HP.

The main reasons for his departure, though, seem to be related to the Microsoft baiting Bruce has been doing. His latest actions are against a Microsoft-backed industry group, the Initiative for Software Choice. This group is persuading governments all over the world to use highly priced proprietary software instead of equivalent open-source software, which is freely available. Bruce started Sincere Choice to counter the Microsoft-led initiative.

Posted by ovidiu at 09:29 AM |

August 29, 2002

Bruce Perens no longer with HP

Open Source

Bruce Perens, one of the original founders of the open source movement, is no longer with HP. He was one of the open source leaders within HP, promoting the open sourcing of various projects that were not part of HP's core business. He was also a big promoter of Debian Linux as part of the Linux Systems Operations.

His departure follows two highly publicized cases where HP invoked the DMCA. In one, HP threatened a group of researchers to stop them from publishing a vulnerability in the Tru64 operating system, which HP inherited through its acquisition of Compaq. The other case involved Bruce Perens himself: he was asked not to give a public demonstration of a DVD region-protection circumvention technique at the highly visible O'Reilly Open Source Convention.

As a note to the reader, my employer is HP.

Posted by ovidiu at 04:09 PM |
Copyright © 2002-2016 Ovidiu Predescu.