Wednesday, February 18, 2009

Windows USB in a Linux desktop

It's been a long time since the last post. I've been able to collect a lot of material for future posts, so I hope I have time to publish all the things I've planned.

The first post of this blog dealt with running a Windows application in a virtual machine, but having the application's window as a standalone window on a Linux desktop. This is very useful for running programs that are not available for Linux, or that cannot be replaced by others with similar functionality. It turns out that there's a particular family of programs that are usually not available for Linux and have no replacements.

Take an iPod Touch, for example.

I bought an iPod Touch for my dad while I was in the US, but I wanted to try it before I came back to Argentina, to make sure it was working. Well, no luck for me. iPod Touches (at least; I don't know about other models) have to be activated using iTunes. There's no iTunes for Linux, and I don't dual-boot my laptop. What did I end up doing? I borrowed a Windows laptop from work, installed iTunes on it, and activated the damn thing. My Tech Support friend at work didn't like having iTunes on one of the company's laptops (even for 5 minutes; I removed it once the Touch was working), but I had no choice.

Or did I?

At that time USB support in KVM wasn't as good as it is today... but to be honest, it didn't occur to me that I could do that. Yes, I know, pretty amateur of me. I could have installed iTunes on a Windows virtual machine and activated the iPod using the virtual machine.

Which brings me to my second problem. I have a Nokia phone. I like Nokia phones; I've grown used to the way the menus are arranged and how things work. Sadly, there's no Nokia software for Linux. Of course you can connect your phone as a mass storage device, or as a modem, but what if you want to use the Nokia software to update your phone? You can't. A friend of mine had a USB printer that printed ridiculously better with the Windows drivers than with the CUPS drivers. He's screwed.

Or not. Enter USB pass-through. KVM lets you assign USB devices directly to a virtual machine, making the VM think that the device is connected directly to it. First, you need to make sure that KVM will see the USB devices in the host. Older KVM versions were not able to use /sys, so you needed to mount /proc/bus/usb:

$ mount -t usbfs none /proc/bus/usb

Current KVM, KVM-84, works without any mounting. Either way, you need to start your virtual machine with the -usb switch, which enables USB emulation in KVM. Windows virtual machines will probably install drivers the first time you start them with USB enabled. Once the VM is loaded, pressing Ctrl-Alt-2 gets you the QEMU monitor console. You can use the info usbhost command to list the USB devices available in the host, and the usb_add command to add them to the guest. The only thing you need to make sure of is that the device is not being used by the host. This may require unloading the kernel module that contains the driver for your USB device, using rmmod or modprobe -r. The argument for the usb_add command is the device number listed by the info usbhost command. For example, if we want to add a USB SD card reader (SDDR-113):
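A monitor session might look roughly like this (the bus address and vendor:product ID are illustrative; yours will differ):

(qemu) info usbhost
  Device 1.5, speed 480 Mb/s
    Class 00: USB device 0781:8889, SDDR-113
(qemu) usb_add host:1.5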


Using the info usb command you can verify that the device was successfully added to the guest. And Windows provides the final confirmation.
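Back in the monitor (again, illustrative output):

(qemu) info usb
  Device 0.2, Speed 480 Mb/s, Product SDDR-113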


Unfortunately I don't have a brand-new iPod Touch around to try activating through a virtual machine, but there's no reason why it shouldn't work. My Nokia phone did =).

Tuesday, July 22, 2008

The bleeding edge Linux desktop

The obvious advantage of closely following the development of the various components of a Linux distribution is that you get the enhancements as soon as they are deemed "stable enough" by the developer. The obvious downside of this is that "stable enough" for them is possibly not "stable enough" for you, and sometimes the bleeding edge is not stable by design.

Consider, for example, the Linux kernel. Current Ubuntu distributions are using a stable 2.6.24.something kernel. The latest stable kernel is 2.6.26. You might say, "But that's not much of a difference". It depends. Under the new kernel development model, all kernel development happens between those (not so) "minor" releases. The first two weeks after a release make up the "merge window", in which new features and updates are merged into Linus' tree. After the merge window, 2.6.(x+1)-rc1 is published, and the rest of the release cycle consists of bugfixing what was merged.

One of the things that got merged for the last 2.6.26 release was LED support for the iwl3945 driver (the driver for the wireless NIC on my laptop). I had been waiting for this since the release of this driver (there was an older driver for this card with LED support, but the driver used a closed-source userspace component, which was not cool). I wanted to have the LEDs as soon as possible. How could I do that?

The first option is to regularly get tarballs from kernel.org. That's not a bad option, but having to load kernel.org each time there's a new release or release candidate is a bit of a bother. The best option is compiling directly from Linus' git tree.

To do this we first need to clone Linus' tree. Of course, if you don't have git installed you need to install it first. On Ubuntu:

$ sudo apt-get install git-core

Now we are ready to clone Linus' tree. This is the equivalent of a check-out in older (svn, cvs) terminology. Position yourself in the directory you will use as the parent of your Linux source tree and do:

$ git-clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

This will take a while. After it's done, change into the new linux-2.6 directory. To compile the kernel, we first need to configure it. Use make defconfig to automatically select the default options, and then tailor your config using the curses-based make menuconfig. Most of the defaults are sane; you should just check that the drivers for your hardware are selected. After you are happy with your config, just do make to build the kernel, and sudo make modules_install && sudo make install after the build is done to install the modules and the kernel.
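Putting it all together, the whole sequence looks something like this (-j just parallelizes the build; adjust it to your number of CPUs):

$ make defconfig
$ make menuconfig
$ make -j2
$ sudo make modules_install
$ sudo make install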

There are still a few things you might need to do in order to boot your freshly compiled kernel. If you use drivers that require firmware (such as wireless NIC drivers), you need to add a symlink in /lib/firmware pointing to the old firmware directory, named after your newly built kernel's version.
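A minimal sketch, assuming the firmware shipped for your running kernel also works for the new one (the version string is a placeholder):

$ sudo ln -s /lib/firmware/$(uname -r) /lib/firmware/[your new kernel version]

After that, just do: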

$ sudo update-initramfs -c -k [your new kernel version]

This will create the initramfs image for your new kernel. The initramfs holds important kernel modules (filesystem, PATA/SATA controller drivers) that are needed to bring up the system, so that they don't have to be built directly into the kernel. Next, you need to point the /boot/initrd.img symlink to your new initramfs. Finally, use update-grub to add the new kernel to the GRUB bootloader.
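Those last two steps, again with a placeholder version:

$ sudo ln -sf /boot/initrd.img-[your new kernel version] /boot/initrd.img
$ sudo update-grub

After a reboot, you should be running your new kernel.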

Or not.

Depending on when you did the clone, you could have landed in different places of the release cycle. Maybe you cloned at rc1, or worse, just after a release. This means you got a snapshot of the kernel tree in its most unstable state. The way I handle that is by not pulling (git-pull, the equivalent of svn/cvs update) until rc1 is out. Even so, there are times when a particular bug hits your setup, making your new kernel unbootable. At that point you can do two things: you can wait for the release (or the next release candidate), or you can hunt for the bug.

git gives you a very powerful (albeit time-consuming) way of finding the bug, or more exactly, the commit that broke your kernel. The assumption is that the tree has booted before (since you were able to compile and install the new kernel), but one commit since that moment broke your boot. A simple binary search procedure (time-consuming, since each test means compiling and installing a new kernel) gives you the offending commit: the tree built and booted before that particular commit. This is called a bisect.

Let's say that you were running kernel 2.6.25 and you wanted to try 2.6.26-rc1. You clone Linus' tree and you compile, install and try to boot the new kernel, only to be greeted by some obscure error message or just an incomplete boot sequence. You decide you want to hunt down that bug. What do you do? You go to your kernel tree and do (taken from the git-bisect manpage):

$ git-bisect start

$ git-bisect bad

This means the current version (2.6.26-rc1 in our example) is bad, i.e. it does not boot; it could also stand for some other problem with the kernel.

$ git-bisect good v2.6.25

2.6.25 was the last version tested that was good, i.e. it booted correctly (in our example). git will calculate the commit halfway between both versions, and arrange the tree to reflect that revision. Now you need to compile and install, reboot and test the new kernel. If it works, you can run git-bisect good from the working kernel. If it doesn't, you need to boot another kernel and run git-bisect bad. Either way, the binary search will continue (obviously in a different direction in each case). In a logarithmic number of steps you will find the first commit that makes your kernel unbootable.
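In practice, each bisect step looks roughly like this (a sketch; remember the firmware/initramfs dance from above):

$ make && sudo make modules_install && sudo make install
(reboot and test)
$ git-bisect good

or, if the kernel fails to boot, reboot into a known-good kernel and run:

$ git-bisect bad

When the hunt is over, git-bisect reset takes the tree back to its original state.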

Usually (at least for most people) this commit serves more as a hint of where the problem is than as a pointer to the exact bug. A kernel developer might start debugging that commit; I usually look at the configuration options associated with it to see if I can work around the bug (in the best case, by disabling the feature). In any case, you have a clearer idea of where the problem is coming from, and possibly this can lead to more information if you search the web.

Some personal experience: the 2.6.26-rc series failed to boot on my laptop due to a SATA-related change that was introduced in the 2.6.26 merge window. Using bisection I isolated the problem to a specific SATA feature that was changed in that merge window. Disabling the feature made my kernel boot as before =).

Monday, May 26, 2008

Remote music in a Linux desktop

I hate having to mount a Samba share in my laptop to access the music on the desktop machine at home. Why a Samba share you ask? Well, I haven't yet converted the rest of my family to The Linux Desktop, so there's still Windows running on the machine with the big hard drive and all the music.

In fact, I don't really care about having to mount a share to listen to music. The problem is that half of the time I forget to mount it before opening Amarok [1], and half of those times I'm not even at home, so I couldn't have mounted it anyway. When I open Amarok at that point (either because I forgot or because I want to listen to something else, such as Internet radio), it mistakenly assumes that my Collection is missing. Then, when I launch it back home with the share mounted, it has to update the collection all over again.

Clearly, the problem is that Amarok's Collection was not conceived as something temporary, that could be present or absent while the program is running. It's true that Amarok detects when the folders comprising the collection change, so that when you add things to your media folders they get almost instantaneously added to the collection. However, that's not enough when all your music can appear or disappear at any given time while the program's running.

The solution turns out to be extremely cool. iTunes users are probably thinking "but I can access other iTunes installs in my home network and stream music from one iTunes to another". Well my friend, that's exactly what we are going to do. iTunes uses a protocol called DAAP that, together with Apple's Bonjour (a set of tools to autodiscover services in the local network), allows iTunes instances on different machines on the same network to discover each other (using Bonjour) and stream music (using DAAP).

Since the machine where I have my music is a Windows machine, I will have to use a Windows DAAP server. The one I found most convenient was Firefly. There's also Tangerine, but it crashed when I tested it (and it requires the .Net Framework).

The installation of Firefly is pretty straightforward, except for the fact that it requires Apple's Bonjour for Windows, so you have to install that first. Firefly lets you select a name for your DAAP server, and the folder you want to share.

Back in Amarok, DAAP shares are added as Devices. A good Zeroconf setup on your Linux machine should discover those shares automatically, but if they don't come up, you will have to add them manually. That's Settings from the toolbar, Configure Amarok... from the menu, Media Devices from the left column, Add Device... in that window.


We want to use the Music Sharing plugin. The name only identifies the share; it doesn't need to be the same as the name configured in Firefly.

After this, you should be able to add a DAAP server to Amarok:


You need to enter the server's IP:


Firefly's default port is 9999. I can now have all my music in my laptop without worrying about mounting shares:


Cool!

[1] We will have a post about Linux media players soon.

Thursday, May 22, 2008

The composite and virtual Linux desktop

(Coming back from a semi-long break with a short post.)

The default way to interact with a KVM virtual machine is through the SDL screen. It would be really cool to have the guest OS use a virtual resolution that matches the physical screen on which its window is displayed, so that one could use the guest in fullscreen without needing to change video modes.

ESPECIALLY

When one owns a 1280x800 laptop and all 4:3 resolutions are distorted when going fullscreen.

My Ubuntu Hardy 32 bit guest automatically configured itself to use 1280x800 even when running with the standard Cirrus video card. For my Windows Server 2003 guest I have to use:

-std-vga

in the command line to use the standard Bochs VGA adapter, which supports 1280x800 in Windows. Maybe the Cirrus adapter supports that resolution in Windows now (I reinstalled the Ubuntu guest some time after I installed my Windows guest, so things might have changed), but at some point it was not possible.
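For reference, the full invocation looks something like this (the image name is from the earlier virtualization post and merely illustrative):

$ qemu-system-x86_64 -std-vga -m 512 win.img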


Now both of my guests use 1280x800 as their resolution and thus look good when fullscreened.

EXCEPT

QEMU's fullscreen had trouble with Compiz on my old Xubuntu Feisty 64-bit host. I had to manually disable Compiz to keep my QEMU/KVM window from falling back to windowed mode after pressing Ctrl-Alt-F (the key binding to go fullscreen).

Now I've updated to Xubuntu Hardy, and as I did a clean install, I no longer have Compiz installed (it comes by default only on Ubuntu, though I don't know about Kubuntu). However, this led to a cool discovery: XFCE's window manager, xfwm4, now comes compiled with compositing support:


This means that I can get transparencies and use AWN (the dock-like navigation bar at the bottom of the screen) without sacrificing the possibility of having my guests fullscreened (which is really, really cool). I still have to retry Compiz with QEMU/KVM in full screen, but for now, SCORE!

Sunday, March 2, 2008

The OpenGL virtual Linux desktop

One of the holy grails of desktop virtualization is graphics card virtualization. Linux users everywhere (me included) drool at the thought of being able to use Windows games in Linux desktops without dual-booting.

There's a Google Summer of Code project called VMGL that moved us closer to making this a reality. Right now, however, it's more of a proof of concept than anything else: it allows a Unix guest using X11 to have the host do its OpenGL processing. Nonetheless, it's a very impressive proof of concept, and there's essentially nothing in the approach that would make it incompatible with a Windows guest (though it would take a lot more work).

The basic idea is to have the OpenGL library in the guest contact a component in the host that processes the OpenGL commands, instead of doing that in the guest. The connection between host and guest is network-based, which allows for a certain level of hypervisor independence. In fact, it was originally planned for Xen, but I've successfully run it using KVM (and so have others).

So, I downloaded the project's tarball and tried to build it on my 64-bit Ubuntu host. It built a bunch of libraries and then failed when it came to building the VNC stuff. It turns out that there are 3 basic ways of using VMGL:

  • VNC

  • Patched SDL viewer for Xen

  • X forwarding, either using X itself or SSH

The VNC options use a patched version of TightVNC that's included in the tarball (under the tightvnc/ directory). Unfortunately, it seems this particular version has trouble building on 64-bit platforms. Luckily, the author included patches for the regular TightVNC tree. Therefore, I downloaded the latest version of TightVNC and unpacked it, overwriting the tightvnc/ directory. Then I patched the TightVNC sources using the included patches:

vmgl.hg/tightvnc$ patch -p1 < ../extra/patches/vncviewer.patch
vmgl.hg/tightvnc$ patch -p1 < ../extra/patches/Xvnc.patch

Now everything builds great =). I then moved on to building VMGL in my 32-bit Ubuntu guest. Everything went great, and I was ready to do some testing.

WARNING: If your guest is also 64-bit, you have to do more patching. Anything other than the VNC option requires loading a VMGL X module in the guest, which comes as a 32-bit binary. You will need to download the X sources and patch them. I haven't tried that.

I chose to run using X forwarding. Ubuntu users can enable TCP access to X in the "Login window" settings. If not, you will need to edit /etc/gdm/gdm.conf (or kdm.conf if you are a KDE user, or xdm.conf if you use the plain X Display Manager, any of them of course in a sensible /etc folder) and allow TCP connections.
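With the GDM of this era, that boils down to something like the following in gdm.conf (a sketch; restart the display manager afterwards):

[security]
DisallowTCP=false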

Now comes the fun part. As I said in the last post, QEMU's default networking uses a usermode network stack. This is great for giving VMs the same network access as the host, without any need for superuser privileges. However, it sucks for VMGL. We need to use tap-based networking. There are great tutorials in the KVM wiki.

Once you've set up the networking, you should be able to run something relatively similar to what I ran:

$ sudo qemu-system-x86_64 stuff/images/ubuntu.img -net nic,model=virtio,macaddr=AA:BB:CC:DD:EE:FF -net tap -m 512

If you're wondering what that virtio stuff is, it's KVM's paravirtualized I/O infrastructure. It will make a difference for VMGL if you try to run an OpenGL game like Quake 3 in your guest. Yes, you can run Quake 3 in your guest with VMGL and it plays very well. Yes, that's totally cool.

Now that you have networking set up correctly, you can allow connections from your guest to your host's X server:

$ xhost +[your guest's IP address]

Following the VMGL readme, now we have to start stub-daemon in the host:

$ stub-daemon

We then see the port that we need to connect to from the guest:

$ tail -n 1 .stub-daemon.log

Now, in the guest, we first point VMGL at the host and the port we found above:

$ export GLSTUB=[your host's IP address]:[the port you found above]

Finally, we set the display to the host's X server main display (i.e. your desktop):

$ export DISPLAY=[your host's IP address]:0

*drumroll*

$ glxgears -info
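If everything is wired up correctly, the info output should name the host's OpenGL stack instead of a software renderer. The strings and numbers below are illustrative; yours will reflect your host GPU:

GL_RENDERER   = Mesa DRI Intel(R) 965GM
GL_VERSION    = 1.4 Mesa 7.0.3
GL_VENDOR     = Tungsten Graphics, Inc.
...
1287 frames in 5.0 seconds = 257.400 FPS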


That's really cool.


But Quake 3 Arena is better =).

Saturday, February 16, 2008

The (virtual) Windows-on-Linux desktop

It's kinda weird to start a blog about the Linux desktop with a post about virtualization, but here's the deal: I know that virtualization is the next big thing (or the current big thing) when it comes to servers. I think that it can also be the next big thing on the desktop.

Many times people don't like using Windows. They don't like having to buy a new computer to use a new operating system, they don't like having to reinstall the operating system to solve problems. You know, all those things that one is pretty much forced to do when one uses Windows. However, they don't switch. I honestly believe there are GNU/Linux distributions that are ready for the desktop. So, why won't they switch?

Why?

Well, people don't use operating systems. People use applications, and rely on operating systems to run applications. When you suggest to someone that they switch to Linux, the first thing they tend to ask is "Will X work with it?". A huge number of times, X is a program. The best answer one can give is "Well, you can use Y to do that".

That's not good enough for them.

Our answer should be "Yeah, don't worry about that program, you can still use it". And I'm not talking about Office. I'm talking about Photoshop. Corel Draw. Or that crazy internal app that the office uses to do that weird thing that they need to do because they are doing business in that niche area.

We can already migrate most of the desktop functionality to open source alternatives. Hell, we can already introduce those alternatives in Windows, and then move people to a new operating system once they are familiar with the new applications. But we cannot migrate all the functionality. The bad news is that most of the time that's a deal breaker.

(Everything should look gloomy and hopeless now.)

Luckily, the arms race engaged by CPU makers and fueled by Microsoft operating systems and gamers has given us machines with ridiculous amounts of processing power. We can run two operating systems on one machine.

Yes, we can. Even if one of those is from Microsoft. Just make sure it's not the last one, and you'll be fine. OK, you can even make it the last one; I've actually tried that. And that's why I think virtualization has such a bright future on the desktop. Parallels Desktop and VMware Fusion are examples of this. They cut the tether between application and operating system. You can run the operating system you like, not the one you are forced to run because of one single application that you need.

Enough of this chit-chat. Let's look at an example: Argentina's tax software is only available for Windows. I need to run it on my Ubuntu laptop. I'm going to use KVM and rdesktop to run the application in a Windows virtual machine and use it without having to see the full Windows desktop.

KVM makes use of hardware virtualization extensions present in recent Intel and AMD processors. To know if your CPU supports KVM, just do:

$ cat /proc/cpuinfo | grep "vmx\|svm"
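The match, if any, will be the (long) flags line, containing vmx (Intel) or svm (AMD). Heavily trimmed and illustrative:

flags : fpu vme de pse tsc msr pae mce ... vmx est tm2 ssse3 ...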

If anything shows up, you have a compatible processor. If not, you may still have a compatible processor but the extensions might be disabled in the BIOS. The KVM FAQ has suggestions on how to deal with this.

In the case that your processor doesn't have virtualization extensions, you can use other programs such as VirtualBox (which is mostly open source) or VMware (which is not, and therefore doesn't get a link =P). These programs use other trickery to run unmodified guest operating systems in x86 boxes. As long as you can open a network connection between guest and host, you should be able to use the last part of this post to run Windows programs directly on your Linux desktop.

First we have to compile and install KVM. KVM already ships with most distros (it's shipped with Ubuntu), but the packaged versions tend to be quite old, especially for a young project like KVM, which still moves pretty fast. Just follow the download link from the main site and get the current version tarball (kvm-60.tar.gz as of this writing).

If you are using an Ubuntu or Debian system, you can now do:

$ sudo apt-get build-dep kvm

To get all needed dependencies.

Unpack the tarball to your usual folder (I use something like /home//local) and do:

$ ./configure --enable-alsa
$ make

KVM uses QEMU to provide the virtual machine with devices, so we need to make sure that KVM compiles QEMU with ALSA support if we want to get decent sound from the guest.

If you weren't able to install the dependencies beforehand, the configure script probably complained. Use your favourite package manager to find the missing packages or (God help you) download and install the necessary tarballs. The easiest way is to have your package manager search for the name of the missing library in the name and description of available packages (maybe adding "dev" to the search so it shows development packages).

After a while everything should be built and you can do:

$ sudo make install

And KVM should be installed.

Now we have to install the guest operating system. You can use any flavour of Windows, but Windows Server 2003 (or any version with Terminal Services) will give us the possibility of having just the window of the application we want on our Linux desktop. You will see. Of course, it should be a Windows version for which you have a valid license (we are trying to move away from Windows, but in a legal way).

Before performing the actual install, we have to create a disk image. What's this? It's just a file on disk that represents a real HDD for the guest operating system. We can do:

$ qemu-img create 2003.img -f qcow2 10G

That will create a 10-gig image file using the QCOW2 format (the native QEMU image file format) in the current directory. The cool thing about the image is that it "stretches" to fit the data the guest writes to it. Right now, just created, the 10-gig image file occupies a mere 50 KB on my machine.

Before we install the system, we should make sure that we can run KVM without having to be root. There's a file in the scripts folder of the KVM tarball called 65-kvm.rules. Just copy it to /etc/udev/rules.d/, make sure to add a kvm group to your system, and add your user to that group.
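A sketch of those steps, run from the unpacked KVM tarball (log out and back in for the group change to take effect):

$ sudo cp scripts/65-kvm.rules /etc/udev/rules.d/
$ sudo groupadd kvm
$ sudo adduser [your user] kvm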

KVM works as a kernel module, so we should load the appropriate module before starting. My laptop has an Intel Core 2 Duo processor, so I will use the kvm-intel module. Homework: give the name of the module to load if you have an AMD processor.

$ sudo modprobe kvm-intel

Now, let's say I have a Windows Server 2003 ISO in /home//stuff/iso/win2003_sp2.iso:

$ qemu-system-x86_64 -cdrom /home//stuff/iso/win2003_sp2.iso -m 512 2003.img -boot d


Will boot the ISO as a CDROM and start the installation. The command-line options are pretty self-explanatory, and the QEMU command-line help is good otherwise. I'm giving 512 megs of RAM to my Windows guest.


That's how the install looks as a virtual machine. The title of the window says "QEMU/KVM", that's how we know that the KVM module is loaded and being used. The install will go on as a normal Windows install. Finally, after another reboot, we will get a login prompt.

I always choose to create a user right away. One good thing to do (which the great QCOW2 image file format allows) is to shut down the machine after the install is done and make our image file read-only. Then we can create another image file:

$ qemu-img create win.img -b 2003.img -f qcow2

The -b switch says that the new image will be backed by the old one, looking identical to it but storing just the changes. This way, if we fuck up from here onwards we always have a clean Windows install available (which won't get modified even if we want it to, as we have made it read-only). Neat, huh?
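Making the base image read-only is just a chmod away:

$ chmod a-w 2003.img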

After you create a user, you might want to do some configuration (probably ramp up the sad 640x480 default screen size) and install the software you need (in this case Argentina's tax software). You should have a working network connection, so you should be able to download the programs you want. QEMU documentation shows ways of sharing directories between guest and host, but that's for another post. Argentina's tax software is downloadable, so I don't need to do any of that =).

Then we have to enable Remote Desktop. Right-click on My Computer (you can find it in the Start menu; it does not seem to show on the desktop by default), Properties, Remote, "Enable Remote Desktop for this computer". Check that your user has access and you are done. Don't forget to write down the full path to your application's executable; we are going to need it.

Strictly speaking, we don't need rdesktop to use our application. However, the default screen we get with KVM has some caveats, the biggest ones being the lack of cut-and-paste and having to let the window grab the mouse to get mouse events into the guest.

rdesktop should be packaged for your distribution. Once you have it installed, restart your Windows guest with the following command line:

$ qemu-system-x86_64 -m 512 win.img -redir tcp:3389::3389 -redir udp:3389::3389

That will allow rdesktop to connect to the guest. By default, QEMU enables a networking mode known as "user mode networking". This basically means that the QEMU process fakes a router that makes your guest behave as if it were behind a really paranoid firewall: every inbound connection is blocked, but things can go out. This way the guest can access everything the host can access (for example, the Internet) as long as the guest initiates the connection. It has a DHCP server and everything, so networking should really come up automagically in the guest.

Finally, we launch rdesktop:

$ rdesktop localhost -s "C:\Program Files\S.I.Ap\AFIP\siap.exe" -S standard -D

The -s switch allows us to run something other than the normal shell, the (in)famous explorer.exe. The -S switch tells rdesktop to listen for minimize events in the guest window and replay them in the host (this lets us minimize the Windows application by clicking on the Windows minimize button). Finally, the -D switch takes away window manager decorations, so we finally have:


Like I promised, a Windows window (pun intended) over a Linux desktop, with full cut-and-paste and no Windows desktop in sight. Cool!

I've been meaning to write a simple application that will automate this process but I cannot seem to get the scope right. I don't want to reinvent the wheel and end up with a full virtual machine management tool like the half-dozen or so that are already available. I want something simple that will let people use their Windows programs in Linux without having to go through this post. Broad scope and functionality suggestions are welcome.