A Simpler MRU for GTKMM C++

08/15/2014

I guess I'm a glutton for punishment, or like the little bird who flies against the wind; I really enjoy developing desktop applications. It's becoming something of a lost art; now it's all about apps for phones and web applications. But I still really enjoy using my desktop and writing applications for it.

The complaint I most often hear from people opposed to adopting Linux as their desktop OS is the lack of applications for it. Well, that won't change until people start writing those apps.

As far as desktop application design goes, everything should be as simple as possible, or so my philosophy goes. All components of application design should follow that simple rule where it makes sense, and I see no reason for that principle not to apply to the humble RUF, or Recently Used File list, also known as the MRU, or Most Recently Used, and so on…

The MRU (from now on) is a tradition; it should be available on any mouse-driven GUI, and it should be easy to implement in any toolkit or framework. Imagine my surprise when I started doing the research on using the MRU object as implemented in GTKMM; it's pretty much the most complicated collection of classes I've ever seen. To display only the recent files relevant to your application you need to employ some sort of filter, and then there are all the complaints I read about those objects on the web. After that I stopped worrying about it and wrote my own class.

I have a lot of respect for Murray Cumming and the GTKMM team, and the whole GTK+ project; it's a huge effort to maintain a unified set of APIs and keep them working for a moving platform like GNU/Linux/Gnome, I am surely aware. I'm also aware that there are usually a huge number of underlying reasons why a developer or organization implements a feature set the way they do. But sometimes you just want a thing to work the way you want it to.

When I got a little deeper into GTKMM's RecentChooser classes (there's the base class Widget, a dialog object, an action object, as well as a menu object, then the filter you need to employ, and on and on) I simply shrugged my shoulders and told myself "I'm not doing that". I get all the variations; obviously the action object is so you can add a history option to an edit menu, whatever. I just wanted the user to click on a dynamic menu that contained an MRU.

So with the history out of the way, I bring you a simpler method using the STL and GTKMM's own menu API:

My current application for the GNOME desktop is a classic Model-View-Controller implementation with a menu bar, and of course under the File item is my MRU.

An MRU at the simplest level is a FIFO, and std::deque is perfect for that job. In my application's data class (a collection of structs, really) I keep a std::deque object.

I started by adding some file menu entries, file1..file4, and binding them to a dummy function. I knew I could change the menu labels and bind them to an actual function later (from my application's Glib::ustring ui_info);
(WordPress is having conniptions with the XML-heavy Gtk UI string, so look in the sample code for this)
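
For reference, here's roughly what that ui_info string looks like; this is a sketch inferred from the "/ui/MenuBar/FileMenu/fileN" widget paths used later in the post, not the sample's exact string, and the FileOpen/FileQuit entries are placeholders:

// A sketch of the UI definition; only the file1..file4 items are
// essential to this post, the other actions are assumed.
Glib::ustring ui_info =
    "<ui>"
    "  <menubar name='MenuBar'>"
    "    <menu action='FileMenu'>"
    "      <menuitem action='FileOpen'/>"
    "      <menuitem action='file1'/>"
    "      <menuitem action='file2'/>"
    "      <menuitem action='file3'/>"
    "      <menuitem action='file4'/>"
    "      <menuitem action='FileQuit'/>"
    "    </menu>"
    "  </menubar>"
    "</ui>";
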
(And from my app’s Gtk::ActionGroup object):
m_refActionGroup->add(Gtk::Action::create("file1",
    "file1", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file2",
    "file2", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file3",
    "file3", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file4",
    "file4", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));

The "on_dummy" method is just an empty method; we need it because the ActionGroup template demands a slot be supplied. We'll swap in the real method later.

My file-open menu item, when activated, opens a file as usual, but it also takes the path it got from the file picker and sends it to a method that inserts the file into the deque object after checking the deque's current size:

In a header file we have these declarations (more on the signal array later):
std::deque<Glib::ustring> mru; // our deque object
sigc::connection mru_sig[4]; // dynamic menu signals

Then in the implementation file, in our "MRU manager" method; app is just a pointer to an "application structure", a struct holding the deque object, among other things:

// If the deque already holds four entries, pop the oldest file off the back
if(app->mru.size() >= 4)
    app->mru.pop_back();
// then add the new file to the front of the queue
app->mru.push_front(str);

Pretty simple stuff. Now every time a file is opened it'll be placed at the front of our deque object, and existing entries get rotated toward the bottom of the list each time a new file is added. In this case I'm keeping the number of recent files at 4, but it would be simple enough to adjust that number, or make it user-configurable, by adding an integer class member and using it instead of the "4" constant above.
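
For instance, a minimal sketch of that, with a hypothetical capacity member (the name is invented here) kept in the application structure:

// Hypothetical: a capacity member instead of the hard-coded 4,
// perhaps loaded from the user's preferences at startup.
std::size_t mru_capacity; // e.g. initialized to 4

if(app->mru.size() >= app->mru_capacity)
    app->mru.pop_back();
app->mru.push_front(str);

The mru_sig array and the menu entries would need to scale with it too, of course.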

Then comes the re-assignment of the menu signals. Earlier in the method code I point at the menu item widgets with a simple array of pointers:

Glib::RefPtr<Gtk::UIManager> _refUIManager; // Typical GTKMM stuff
Gtk::Widget* file[4];

file[0] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file1");
file[1] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file2");
file[2] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file3");
file[3] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file4");

The paths refer to the menus in the Gtk XML UI; if you're familiar with Gtk desktop programming with GTKMM you should be aware of how that works. We'll need these as references for the dynamic menus we'll be connecting the activation signals to. Speaking of which, here's how those signals are connected to our menu items:
int n = 0;
for(std::deque<Glib::ustring>::iterator it =
    app->mru.begin(); it != app->mru.end(); ++it) {
    const Glib::ustring& label =
        (*it).substr((*it).find_last_of('/') + 1, (*it).length());
    dynamic_cast<Gtk::MenuItem*>(file[n])->set_label(label);
    app->mru_sig[n].disconnect();
    app->mru_sig[n] = dynamic_cast<Gtk::MenuItem*>(file[n])->signal_activate().
        connect(sigc::bind(sigc::mem_fun(*this, &ExampleWindow::on_mru), label));

    if(dynamic_cast<Gtk::MenuItem*>(file[n])->get_label().length() > 0)
        file[n++]->show();
}

We iterate through our list of four file paths, take the last path component for the name we display in the menu, and then do a generic signal disconnect on the item. If we don't, signals will stack up on the item and we'd have several file paths flying at our "open file" method.

We then connect a new signal bound with the path data we want the menu item to open.

The signal method is simplicity itself:

void ExampleWindow::on_mru(Glib::ustring& label)
{
    std::deque<Glib::ustring>::iterator it = _app->mru.begin();
    for(; it != _app->mru.end(); ++it) {
        const Glib::ustring& text =
            (*it).substr((*it).find_last_of('/') + 1, (*it).length());
        if(text.find(label) != Glib::ustring::npos)
            std::cout << (*it).c_str() << std::endl;
    }
}

The bound text from the label is searched for in the deque object; if we have a match, we have our full path to the recently processed file.

There, a functioning MRU for a Gnome desktop application without the hassle of Gtk::RecentChooser. Couple that with a way of serializing the list* between sessions and you have a full-fledged MRU feature for your app.
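
For the serialization half, one simple approach (a sketch with an invented function name, not the Gconfmm alternative hinted at in the footnote) is to persist the deque as plain text, one path per line, and read it back at startup:

#include <deque>
#include <fstream>
#include <glibmm/ustring.h>

// A sketch: write the MRU out, one path per line; loading is the reverse.
void save_mru(const std::deque<Glib::ustring>& mru, const std::string& path)
{
    std::ofstream out(path.c_str());
    for(std::deque<Glib::ustring>::const_iterator it = mru.begin();
        it != mru.end(); ++it)
        out << it->raw() << '\n';
}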

There's always room for improvement; traditionally MRUs have key accelerators ("1", "2", "3"…) and that would be a nice touch, and simple to add. And ideally this should be a fully encapsulated object, a drop-in, rather than the collection of methods on a main class presented here.
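
For the accelerators, Gtk::ActionGroup::add has an overload that takes a Gtk::AccelKey, so a sketch of that touch, applied to the first slot, might look like this:

m_refActionGroup->add(Gtk::Action::create("file1",
    "file1", "Reopen this file"),
    Gtk::AccelKey("<control>1"), // Ctrl+1 reopens the newest entry
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));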

The sample code shows a simple implementation using GTKMM 2.4, but I don't believe there's anything in the code preventing it from being converted to 3.0. I hope this will help developers create more applications for Linux; we need all we can get!

There’s a very simple sample you can grab here, build it with:

g++ -Wall -std=c++11 examplewindow.cc main.cc -o menu `pkg-config --cflags --libs gtkmm-2.4`

*Keep your fingers crossed, I may publish a much simpler alternative to Gconfmm.


I Did It Again

07/09/2013

The Retina 13″ is on top; the previous MacBook Pro I have is in the middle, I think

I'm so ashamed. I swore off Apple products forever, and here I am again with a new MacBook Pro Retina 13″.

I love my Acer S3, but it has problems. It's got 4 gigs of RAM, total. The keyboard is prone to spurious typing anomalies (broken words, typing errors, lots of them). The resolution is really low, even for an ultrabook in 2013. The battery lasts 2 hours on a full charge, 2 1/2 if you really pack it. In 2013 those stats are ridiculous. Plus I've had a banner year so far, so I had some spare bux burning a hole in my pocket.

First I went to the nearby Fry's Electronics and took a look. What I look for in an ultrabook is light weight and power: the lightest book with the most GHz I can get. Then I look for RAM; expandability would be nice but that's REALLY hard to find in an ultra. So, given that the RAM will be static in size, I try for the most I can get. That's also hard. It was impossible to find an ultra with more than 4 gigs two years ago; ALL the manufacturers were worried about price plus meeting the minimum specs for running Windows 7, so 4 gigs was the most they were willing to fit the new, hot-selling ultrabook phenomenon with. Now that things are a little more relaxed it's easier to find ultras with 6, and even 8, gigs. Another thing I crave is low weight. I know I ask a lot, but as a consultant I travel a lot and weight is a serious consideration. One thing I really don't need is a book with a light drive (you know, a CD/DVD drive). I needed one last year to install Windows XP on an old but tiny PC I wanted to use as a media server, but before and after, rarely. If you feel like you need plastic light media for anything, you need to get acquainted with modern SD multimedia memory devices. (Ever breathe on a CD and all of a sudden not be able to read it? I have.) Yet drive-less books were difficult to find, being largely relegated to the Japanese market. Lately however that hasn't been as much of an issue, and light-driveless books are easy to come by here in the states.

At the Fry's nearest to my house I wandered the notebook aisles until I spied a really great number that met all my criteria. It in fact looked a bit smaller than typical ultrabooks, but at 8 gigs of RAM it would have worked quite well, and I wanted it.

Is there anything worse than a retail store that won't sell you something? I don't think so. I found a sales droid and showed her the ultra I wanted to purchase. She spent the usual 10 minutes fumbling about doing who knows what, and finally came back and told me she couldn't sell it to me. I asked her for the display model. She said she couldn't sell me that one either. Seeing red, I left the store. I should have looked online for the model and probably would have gotten it cheaper, but I was really pissed off. I was on a mission now.

If you're familiar with Fry's you know it's the one retail brick store that, like Mitt Romney's "binders full of women", has aisles full of notebooks; there's really no other place like it. The help is utterly worthless but the sheer number of models on display can't be beat. The only other place better WAS CompUSA, may that establishment rest in peace. So my only other shot, though I was loath to take it, was another Fry's. I decided to haul my butt to the next nearest one, which happens to be the Fry's in Palo Alto. THE Fry's. A Fry's in San Jose is certainly near the pulse of Silicon Valley, but the Fry's in PA would be in the Valley's heart beat. This is near Stanford University and Page Mill Road, the valley's trail of venture capital repositories. THE Fry's did indeed have a number of models on display, but not the make/model of the one in San Jose that I wanted. What it did have was a full selection of MacBook Pros with the Retina display. I took a look at the Retinas. Damn, the display was pretty. They had both MacBook Airs and the "classic look" Pro models, the new ones. The smallest one caught my eye; it was just like my older MacBook Pro but considerably smaller, and with that incredible Retina display. I also knew that my keyboard issues with the Acer would be completely gone. The crisp MacBook Pro keyboard design is probably the best in the business. I also knew that I would have problems running the software that *I* wanted to run on it. The latest MacBooks use the new Intel boot process known as Unified Extensible Firmware Interface, or UEFI, and like anything unknown the human reaction is to fear it. Which I did, but it's the replacement for BIOS, and it's not going away. It also complicates Linux installation. Thankfully it doesn't prevent it, which I first feared; it simply complicates it.

In an effort to be both entertaining, relevant, AND useful, let me briefly summarize the process of installing Linux on a Retina. And let me preface the process by explaining that I have absolutely NO use for MacOS, sorry Mac fanboys. I have a larger MacBook that runs Windows 7 when I need that; I also stuffed 16 gigs of RAM in the thing, so I use it for running virtual machines (usually other versions of Ubuntu; the embedded & thin client world is going nuts for Ubuntu for some reason). What I wanted was a small, light, powerful book for traveling, with MORE RAM. Since most of my work is on Linux, that's what I wanted to run.**

First thing you'll want to do is install rEFInd, and use the "binary zip file". Don't get too caught up in the wordy web page that is the rEFInd home page; the author spends WAY too much time explaining the story of rEFInd in tangents. After resizing your disk, execute the install.sh script as root using the "--esp" and "--drivers" options. I'm not sure that the drivers option is absolutely necessary, but the esp one is. If you don't specify it, rEFInd won't get installed on the disk, and when you reboot the machine Linux won't boot. I went 'round and 'round on that one. Then reboot with your Linux distribution ISO of choice written to a plugged-in USB dongle. There are some instructions on the net saying you need to write the ISO in a special way for MacOS; I didn't find that to be true. You should see a new boot manager menu with an Apple logo and a generic USB symbol as button selections. This is the rEFInd boot manager. Select the USB option. Your choice of Linux should be fairly recent so as to take advantage of the EFI boot process; if you insist on using an older distribution you're on your own, I have no idea what BIOS-based distributions work on the EFI system of the MacBook Pro Retina. After the dry-run system boots up (if your distro has a test-drive desktop, I think most do now), go ahead and double-click the install icon. Installation is the same as always, but be very aware of what you are doing during the disk editing part of the install; you'll be presented with a gparted (or whatever they do with KDE-based distros) dialog. Go ahead and partition the main slices however you want; BUT DO NOT DELETE THE EFI PARTITION. If you want to use Linux as your sole OS on the Retina, that's fine, as long as you do not touch the ~200 meg boot partition at sda1, or whatever device node your boot disk is (usually sda1 on Debian systems). This is the partition that should be clearly labeled "EFI" in the gparted partition list. I wanted to use this book solely for Linux, so when I got to this step I blithely deleted all partitions and created a main slice and a swap area, which normally would work fine. I installed Linux (Mint in my case) and when I rebooted: NOTHING. The machine wouldn't load Mint.
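
In shell terms, the rEFInd step boils down to one command, run from the unzipped rEFInd directory (assuming the script keeps the install.sh name in the version you download):

sudo ./install.sh --esp --drivers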

After doing some research I learned about the newer EFI boot process, that rEFInd was needed to install a new boot loader, and that you don't want to re-construct an EFI boot partition from scratch. After messing around with re-creating EFI boot partition structures for 3 days (they have to be a certain size, have a certain directory structure, have certain files…) I finally re-installed MacOS Mountain Goat* or whatever and re-tried my Linux installation, this time without messing with the EFI partition. It worked like a charm; my new Retina was running Mint 15.

Here are some after-install pointers: I had to install and open up the curses-based alsamixer app and unmute all the sound devices; simply upping the volume controls or messing with them in any way using the usual Gnome controls didn't give me my sound. I also edited /etc/modprobe.d/alsa-base.conf and added "options snd-hda-intel model=mbp101" as the last line in that file. The HDMI port on the right side doesn't appear to work, unfortunately, and neither does a mini DisplayPort-to-HDMI adapter. I was really looking forward to having HDMI out. I don't know if a miniport-to-VGA or -DVI adapter will work. Also, this book appears to have two display adapters, one from Intel and one from nVidia; don't install any of the many nVidia driver options available in the repositories, they don't appear to work, while the Intel driver works great. It's kind of weird getting a full 2560×1600 resolution on a 13″ notebook LCD. That resolution is so high that I had to step on it a bit to make everything readable. I re-compiled a mandelbrot-generating X app I wrote that also prints the execution time in the shell if it's launched from there; running it on the Acer took about 9 seconds, on the Retina it takes 5. I get the sense also that this thing has four full Core i5 @ 2.5 GHz processors, not just two real and two virtual ones. I've also read reports of the Retina running very hot on Linux, but I've not noticed this.

The 13″ Retina is a very powerful ultrabook, a true "Ultra". I love it. It's really the perfect size with the perfect power and RAM. It'll run at least twice as long on a full battery charge as my trusty-but-slower Acer S3. I'm looking forward to doing a lot of work on it. I hope Linux developers down the road get the ports working, but that's not going to hold me back.

UPDATE: I spent the latter half of yesterday building and installing the 3.9 kernel and some Intel support libraries and voila! The HDMI port works!!! I'm staring into the warm glow of my Vizio 26″ HDTV as I type this. It's funny, the Retina's LCD is STILL higher-rez than the Vizio, but it's nice to have a "console"-sized display. The MicroSD slot on the right works too! I LOVE THE RETINA!! Pricey, and locked down as far as RAM & SSD go, but I've come to live with that from ultras. If you're looking to run Linux on the 13″ Retina, follow the above directions and then grab the 3.9 kernel and install it. Also grab the Intel graphics stack components here. After installing everything (yes, I went ahead and compiled everything from source, grabbing missing libraries from the baseline repositories when they popped up) I had control over my HDMI and SD ports.

* I have to say that Apple really saved my ass in this regard; the 13″ Retina (and I assume all the latest Pros) doesn't come with much in the way of paperwork or media, almost none at all in fact. Just the usual worthless warranty "square". There is no Mac OSX install disk, nothing. Just the MacBook and that funky little white power supply. Scary, but in some ways refreshing for a faux minimalist such as myself. Re-installing Mountain Lion was a simple matter of hitting an option-R key combo during the boot process, using the disk utility to re-partition the drive the right way, and then selecting the Mac OS re-install option. Apparently, since I had already configured the book to use my wifi, it simply retrieved that configuration from *wherever* and went to town. After a warning that the re-install process would be slowed by my use of wifi (a hard ethernet connection would obviously be faster, but who cares?) it automagically connected to an Apple server (I assume) and re-installed Mountain Lion. The whole thing was really kind of amazing from a geekly perspective, and very easy.

** The Apple droids will say that MacOS is a version of Linux. No, it's not. It resembles it in better-than-superficial ways, but it's not.

Building Kernel Modules with Autotools

05/06/2013

System call graph of Apache; system call graph of IIS

The above graphs represent one of the reasons I really love Linux; the one with the incredibly horrible tangle of tentacles reaching into system services is the system call graph of IIS, the other Apache's. Whoever designed Apache's architecture is a virtuoso of simplicity. You can see this same philosophy repeated many times in most of the tools and utilities in the stable of Linux's OSS offerings.

On the other hand, Linux has a vastly varied and expansive collection of tools, many that do the same tasks in very different ways. It can be a nightmare trying to tame all the different approaches to solving problems that the world of Linux developers takes, not to mention all the architectures and hardware platforms Linux supports.

Autotools is one such tool. Autotools addresses the issues that are typically encountered when trying to create applications that can be deployed on many platforms. The main complaints regarding autotools, and certainly justified, are that it has a very high learning curve, that its syntax is among the most cryptic of any digital tool out there, and, one that I've encountered time and again, that it's not very well controlled with regard to versioning. Interestingly, at least in my experience, it seems to work very well; I've rarely encountered problems with it. When well implemented it really runs like a champ.

Replacement configuration tools have been proposed and created; probably the most famous is CMake, which works very well and is quite a bit easier to understand. But I really had my heart set on building a kernel module with autotools: 1) because I'm a glutton for punishment, 2) because autotools is the traditional way of doing portable things on Linux, 3) it's far older than any of the other tools, 4) it has a very big array of options and is quite powerful, and 5) if I was successful I would have something really cool to write about on this blog. Autotools is not without its problems, as very well documented by thousands of developers around the world, though. One of the biggest problems I can see is that the autotools "suite", as it were, is not very well synced. Or to put it another way, not synced in a way that is easy for the end user to understand. Different versions of each tool can have adverse effects on the others, and it's not clear at all what version of what works with what. For example, on my Mint 13 system I have libtool version 2.4.2 dated 2011 and autoconf version 2.69 dated 2012; sounds good, right? But for some reason the series of commands that leads to libtool creating "ltmain.sh" stopped working. I have no idea why; I didn't install anything new with regard to libtool that I know of. In googling this situation I replaced my invocation of "libtoolize --automake" with just "libtoolize" followed by "autoreconf -i", which got me back on track. My previous train of invocations, which was "aclocal", followed by "autoheader", then "libtoolize --automake" on down to "autoconf", just stopped working. I don't know why. And I can't really determine what invocations are old and what are recommended in the present state of these tools. I'll show you how I got this to work, and maybe someone can enlighten me.

In researching this task I found a lot of web comments that went along the lines of "look, just use KBuild, forget autotools"… well, again, "interestingly", what I'm going to show you doesn't do away with KBuild; it's necessary for building kernel modules (as far as I know) no matter what your build system is, even if it's a simple shell script. But the comments were disheartening, leading me to believe that my quest would end in failure. Then I happened upon a small kernel driver, LiMIC, which is some kind of driver for MPI node communication in Linux computing clusters. And in the tiny archive was a tiny example of a kernel driver using autotools. An example is worth a thousand words.

I sat down and tore the archive apart and examined every config file involved in the autoconf process for this driver. In only a few files lay the wisdom I'd been searching for. An example is good, but it's not much better than worthless if it's made up of thousands of files and tens of thousands of symbols. LiMIC is perfect; it's a fortune cookie of technical wisdom. And the fruit of that wisdom I share with you today. Another "interesting" point (I shall try not to over-use that word) is that I'm surprised at the small number of tricks that must be used to get autoconf to work the way we want it to. A very tiny piece of a makefile must be placed in the source directory, as well as a fake .in file, otherwise automake complains of a missing .in file.

First, download this archive. Inside you'll notice an .am file, an .ac file, and a "module" directory. In that is the source of the old chardev driver, the fake .in file, and the abbreviated Makefile.

The versions of everything involved in my effort are here:

  • libtool 2.4.2
  • automake 1.11.6
  • autoconf 2.69

Starting with configure.ac we have AC_PREREQ, which contains the version of autoconf we want. I have no idea what is supposed to go here, as I don't know what versions of autoconf have what features I need; I simply put the version of autoconf I have installed on my system. Back-revving that number might be a good idea, or it might not, I don't know. After that is AC_INIT; this contains fields such as the name of the project, the version, and a project-related URL, and I believe it can contain a number of other fields. I have no idea what else can go in there and I'm not too sure it matters. That's followed by AC_CONFIG_SRCDIR; as the name says, this tells autoconf where the top of the source tree is. Next we have AM_INIT_AUTOMAKE, which initializes automake (surprise). Murray Cumming, team lead of Gtkmm, recommends adding the "1.10 -Wall no-define" fields, with an optional "foreign" added so we don't have to deal with all the readme and info files as is the Gnu style of doing automake. I took that option as well. This is followed by instructions telling automake to add the m4 make macros and what to call the automake porting header file. I'm really confused by the current state of the m4 macro processor; it seems to have different behavior based on the versions of the other tools in the suite. Not unexpected behavior when dealing with technology, but I can't begin to qualify or nutshell what behaves like what when invoked by the other. I just know I got this stuff to work.
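
Condensed, the opening just described looks something like this; the project name, version, and URL are illustrative stand-ins, not the archive's exact values:

AC_PREREQ([2.69])
AC_INIT([chardev], [0.1], [http://example.org])
AC_CONFIG_SRCDIR([module/chardev.c])
AM_INIT_AUTOMAKE([1.10 -Wall no-define foreign])
AC_CONFIG_MACRO_DIR([m4])
AC_CONFIG_HEADERS([config.h])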

Now we get to the important part; these are what distinguish a kernel project from an application. First we use AC_SUBST directives to pass to the resulting makefile where things are. In the next few directives we pass the kernel version and location, and some directives that differentiate a kernel module from sundry application builds; in addition to the usual "AC_PROG_CC" and "AC_PROG_INSTALL" compiler directives we add the LT_INIT & AC_SUBST([LIBTOOL_DEPS]) macros. Then the rest are checks; some of the sections are just comments. For example, there are no external libraries necessary, so that section is blank. One additional option I think is important, which goes with the variable substitutions, are the options that can be passed to the configure script which will result from this file. If we were to make the module an option instead of the whole point of this exercise, we could add an AC_ARG_ENABLE directive such as:
AC_ARG_ENABLE([module],
    [AS_HELP_STRING([--enable-module],
        [Build chardev kernel module])],
    [],
    [enable_module=no])

This directs configure to accept the optional "--enable-module" switch. The AC_ARG_ENABLE macro is followed up with an AM_CONDITIONAL([BUILD_KMOD], [test "x$enable_module" != "xno"]) instruction. This "binds" the result of the previous test to the variable BUILD_KMOD, with a default of "no", which will then go into the next file, Makefile.am.

A quick word about AC_ARG_ENABLE; I've noticed variations and other forms of argument passing to the configure script, which leads me to question whether this macro is up to date or not. I don't know, and I'm frankly out of the energy to research it. I did a quick search to see if there were any glaring comments about it and I didn't find anything. Obviously, do your own research. I've tried to use the most up-to-date autoconf macros throughout, as far as I can tell, such as LT_INIT, but the info on this stuff is so obtuse and it's really hard to keep track of everything, at least for me.

Returning to argument passing: if the option existed, we would put a condition "if BUILD_KMOD" in the Makefile.am, and underneath it all the following code to build the module (see the sketch below).

Every Makefile.am should start with "ACLOCAL_AMFLAGS = -I m4" up to automake 1.12; after that we can do away with it as it'll be deprecated, but for now we need it or automake will complain about missing files. Next we have a directive that lists all the files that are part of the package but needn't be installed (EXTRA_DIST). Then there are a few macros, which are explained here. Now we get to the meat of the matter; what follows is pretty much a standard makefile, with the addition of macros and variables populated by the configure.ac file that tell the system where to put the driver's udev rules, and our target build rules. We are using third-party make rules as described here. Hooks are described as "…guaranteed to run after the install of objects in this directory has completed.", and apparently we need a hook to install the chardev module.
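
Boiled down, the Makefile.am amounts to something like this sketch; KERNEL_SRC stands in for whatever variable the archive's configure.ac actually AC_SUBSTs, the recipe lines must be indented with tabs, and you'd wrap the rules in "if BUILD_KMOD" … "endif" if you took the optional-module route above:

ACLOCAL_AMFLAGS = -I m4
EXTRA_DIST = module/chardev.c module/Makefile

all-local:
	$(MAKE) -C $(KERNEL_SRC) M=$(abs_srcdir)/module modules

install-exec-hook:
	$(MAKE) -C $(KERNEL_SRC) M=$(abs_srcdir)/module modules_install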

Lastly, in the source, or "module", directory, we have our source files, plus a Makefile fragment, the complete contents of which is our resulting object file as the target of the KBuild "obj-m" directive. Then there is an empty ".in" file which is needed to induce automake to place the other necessary files in this directory, in this case chardev.in. I'm not sure, but I think it can be named anything. Finally, create an empty m4 directory in our "top_build_dir", the top of the module tree, or autoconf will complain. Hopefully this will be changed soon in upcoming revs.
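
That fragment really is just the single KBuild line naming the object to build:

obj-m := chardev.o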

With the main components of this simple driver in place, we can issue the command that generates all our supporting project files. In the top of the driver directory we issue "autoreconf -i". It can take a minute depending on the size of the project, but soon you should see output regarding a number of files being created. The result should be the familiar "configure" script that kicks off the configuration process. After configure completes, you know the drill. Running make should create a "chardev.ko" module in the "module" directory. "sudo make install" and the module will be placed in the system modules directory and the udev rules file placed wherever is appropriate for your system. I've included a skeleton file here; all you need to do is issue the "autoreconf -i" command, "./configure", and so on to build the project. I hope you're as excited as I was to be able to do this using the autotools suite; it's exciting to me to prove the vast majority of opinion is not exactly correct.

After I write a few shell scripts to help me manage my growing source archive, I'm going to investigate whether it's possible to create kernel modules with CMake. Ciao!

Bluetooth on the Acer Aspire S3

03/12/2013

This is the same exact model S3 I bought. I love the thinness it sports.

I was getting desperate to replace my aging Gateway netbook as my main Linux iron, and was really pleased to find a demo Acer Aspire S3 available for a decent price at a local Office Depot. It has all the features of my old single-core 64-bit Celeron-powered Gateway, but the Acer has a quad-core i5 processor and a faster clock speed. Other than the beefier CPU it's the same in all respects; even the RAM is the same 4 gigs, although on a faster FSB, of course. Another thing that was exactly the same is that neither book appeared to have bluetooth. No matter, I bought a cheap USB bluetooth adapter and was off to the races, for 6 months.

Dongles are kind of a pain; you have to remember to plug them in, and they are getting REALLY small these days, so they can get lost. I already lost one since buying this rig. I considered ripping the bt chip out of my dongle and soldering it directly to the USB bus inside the case of my new Acer. But I always had a nagging feeling that the box that housed my new rig said something about it having bluetooth. I suppressed the feeling for 6 months, until I saw somewhere on the net that indeed, the Aspire S3 does have a bt chip inside.

Well, knock me over, what the hell have I been doing all these years? The old Gateway probably has one too. First thing one must do is make sure the chip is enabled by throwing down a Fn+F3 combo. When I found this and tried it I was almost ready to throw up. A "B" rune symbol appeared on my taskbar. But that's just the beginning. For whatever reason the stock Linux kernel's bt driver code doesn't include support for the Atheros AR3012 bluetooth chip, although it supports two of the third-party adapters out in the wild. On this machine the chip is VAR'd by Lite-On and shows up as such in an lsusb listing. I've noticed the S3 has varying specs, having sported an i3 AND an i7 in various incarnations, so you'll need to be sure of what you have in yours. Fortunately you can google the manufacturer and equipment number from the lsusb output and confirm who made your S3's bt chip.

So you really can enable the on-board bt adapter, and it's a fairly simple procedure assuming you are comfortable re-compiling your distribution's kernel. If so, I'm going to show you how to do it, step-by-step.

First, there are lots of instructions for recompiling your kernel on the net, and they seem to only grow in verbosity, not get simpler. Let me tell you: if you are dealing with a 3.x kernel, the following steps should work for any generic distribution. All flavors of Linux work off the same kernel, the one Linus Torvalds maintains to this day (with help, of course), so recompiling it really should be the same across all the distributions, and it is. Those other extra steps seem to be geared to using some odd, unnecessary feature that a particular distro offers. Certainly if these steps don't work for you I'd appreciate a note with an explanation, but really, this recipe is probably going to work for most, and it's the most concise I've ever seen.

  1. sudo apt-get install build-essential (on other distros use your package manager to get the tools you need to compile your kernel)
  2. Go to kernel.org and grab whatever version you feel comfortable with, 3.8 is out but I grabbed 3.5.
  3. Expand the archive into a scratch directory.
  4. Add the following variables to your shell:
    • export CHOST="x86_64-pc-linux-gnu" (we're assuming a 64-bit architecture)
    • export CONCURRENCY_LEVEL=4
    • export CFLAGS="-march=corei7 -O2 -pipe"
    • export CXXFLAGS="$CFLAGS"
  5. Here's where we fix bluetooth for the Atheros chip: edit drivers/bluetooth/ath3k.c and in the table static struct usb_device_id ath3k_table[] add a line for your chip: { USB_DEVICE(0xNNNN, 0xNNNN) }, substituting NNNN for the manufacturer and device number for your chip from lsusb. Then look for static struct usb_device_id ath3k_blist_tbl[] and add { USB_DEVICE(0xNNNN, 0xNNNN), .driver_info = BTUSB_ATH3012 },. Save that file and open drivers/bluetooth/btusb.c, adding { USB_DEVICE(0xNNNN, 0xNNNN), .driver_info = BTUSB_ATH3012 }, to the static struct usb_device_id blacklist_table[] structure. Save. (These edits are sketched just after this list.)
  6. Issue "make oldconfig", and just hit return through all the options.
  7. "make all"
  8. "sudo make modules_install install"
  9. "sudo update-grub" (if you use grub)
  10. Reboot into your new kernel.
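
For reference, here's roughly what the step-5 edits to ath3k.c look like; 0xNNNN is a placeholder for your chip's vendor and product IDs from lsusb, and the existing entries are elided:

static struct usb_device_id ath3k_table[] = {
    /* ...existing entries... */
    { USB_DEVICE(0xNNNN, 0xNNNN) },  /* your Lite-On/Atheros AR3012 */
    { }  /* terminating entry */
};

static struct usb_device_id ath3k_blist_tbl[] = {
    /* ...existing entries... */
    { USB_DEVICE(0xNNNN, 0xNNNN), .driver_info = BTUSB_ATH3012 },
    { }
};

The addition to blacklist_table[] in btusb.c takes the same { USB_DEVICE(...), .driver_info = BTUSB_ATH3012 }, form.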

Your Acer Aspire S3 should be bluetooth enabled now.

OSS Development Can Be Pain

02/23/2013 § Leave a comment

It's really not THAT bad, I suppose.

I've got a really good idea for what I believe is a marketable IT utility for the restaurant industry, so I sat down earlier this week to set up my tools and libraries and roll up some prototype code. It's been a while since I've done any Apache module development, so I set about re-familiarizing myself with the code and procedures.

One thing I have a real stick up my ass regarding is documentation; it needs to be complete and accurate, or you don't publish it. Publishing documentation that is inaccurate or incomplete is a sin. Too bad so many web sites do it.

I get it if it's just out of date, but inaccurate is unforgivable. Stuff like "…then edit the file…"; so many times I see this, without any reference to WHAT file is supposed to be edited, or edited with what…? Another thing that bothers me is when I need to track down instructions that leave off without any discernible next step, and then I have to track down that next step on some other website. I think a lot of this is due to the nature of OSS, things change at the drop of a hat, but if you're going to publish information, it's of no use if it's incorrect.

I want to jot down the steps I took to get an Apache 2 module working from source before I forget, so here goes. We are assuming an Apache module called "mod_module.c". First install the Apache 2 dev files, which include the Apache extension tool (I'm not supporting Apache 1, sorry): "apt-get install apache2-prefork-dev"; prefix the command with sudo if you're not root, of course. Then write your way cool extension; the important bit is this: in a remarked section of the code place a structure similar to this:

/* MODULE-DEFINITION-START
* Name: mod_module
* ConfigStart
MOD_LIB="-L/usr/local/lib/somelib -lsomelib -lm -lz"
if [ "X$MOD_LIB" != "X" ]; then
LIBS="$LIBS $MOD_LIB"
echo " + using $MOD_LIB for Mylib support"
fi
* ConfigEnd
* MODULE-DEFINITION-END
*/

You'll of course be substituting the "mod_module" references for the actual names in your dependency list. The important part, the part that took me a few tries to understand, is in Apache 2's LoadModule directives. In your distribution's apache2 "../mods-available" subdirectory you need to make sure there's a "mod_module.load" file; make sure the name of the file corresponds to the "Name" field in this struct, and the file should contain a LoadModule directive that looks like "LoadModule mod_module /usr/lib/apache2/modules/mod_module.so". You'll then want to build the module with "sudo apxs2 -c -lsomelib -lm -lz mod_module.c", using the "somelib" reference if your module has a dependency on some external library. You'll end up with "mod_module.la"; this also threw me: you end up with a libtool archive rather than a shared object. The next step creates the actual shared library Apache will use; "sudo apxs2 -i mod_module.la" will build the shared library and place it where Apache will find it. Then "sudo a2enmod mod_module" activates it. You also need to make sure that the "Directory" directives in your <sites-available>/<website> site config, or your ".htaccess" file, contain whatever properties your module requires.
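
To make the above concrete, here's a minimal mod_module.c skeleton of the sort these steps assume; a sketch using the stock Apache 2 hook API, with a placeholder handler rather than anything from my actual module:

#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "ap_config.h"

/* Placeholder handler: declines requests not addressed to us so Apache
 * falls through to its other handlers; real logic would go here. */
static int mod_module_handler(request_rec *r)
{
    if (strcmp(r->handler, "mod_module"))
        return DECLINED;
    ap_set_content_type(r, "text/plain");
    ap_rputs("mod_module is alive\n", r);
    return OK;
}

static void mod_module_register_hooks(apr_pool_t *p)
{
    ap_hook_handler(mod_module_handler, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA mod_module_module = {
    STANDARD20_MODULE_STUFF,
    NULL, /* per-directory config creator */
    NULL, /* per-directory config merger */
    NULL, /* per-server config creator */
    NULL, /* per-server config merger */
    NULL, /* command table */
    mod_module_register_hooks
};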

Believe me, it took over three days for me to figure out how to make this all work, given the disparity, dry prose, and plain incorrectness of the textual instruction available on the net. I'm sure it's correct; however, I'm also sure I've missed something, so if anyone out there tries these steps and can't get them to work I'd appreciate feedback.

Two Recipes

01/22/2013


I want to impart two procedures today; the first is the right way to configure the HP Linux Imaging and Printing suite on Linux. The second is my take on the classic Taiwanese oyster omelette.

I purchased a new HP Photosmart 5520 so I could scan some forms to send an employer, and decided I needed to get a network printer since I've not had a console, or tower, PC in years. This way I can walk all around the house and not need to worry about connecting a USB port to a printer during the odd printing job. I also knew I wanted an HP printer, since their Linux support, like Intel's, is pretty generous.

So I get the thing home, unpack it, and set it up. It was pretty painless with my MacBook Pro Windows 7 machine, as I assumed it would be. Locating and installing the HP driver suite for Linux (HPLIP) was also pretty painless on my underpowered Gateway E1440u netbook. The thing about the netbook, however, is that the installation was painless in no small part because the OS install was mature, and over time I had installed many (or all) of the dependencies the driver suite needed. Installing HPLIP on a new machine was painful though; the usual dependency chain issues. APT is a great system, but why can't it report the actual NAME of any dependencies it finds lacking for any particular application you want to install? This one feature would increase the usefulness of APT immeasurably.

There are several names for packages; there is the name that is useful, then there are usually one or more quite useless names for every package in every repository for every distro. Searching for the package that APT refers to and presents to the user is usually not very useful. APT should report the ACTUAL NAME and minimum version of the missing dependency it's looking for, like this: "missing libgooblat-dev-3.3.4.6", NOT "missing gooblat-devel". It drives me crazy when it complains about missing a lib yet refers to the binary package name. YES, I know there are tools and commands that will tell you what the missing package(s) are, but they aren't widely published; even now I dread the effort that will be needed to locate them. And YES, I know Synaptic will resolve dependency issues, but this completely defeats the need. I have little love for firing up Synaptic when I'm in the heat of installing stuff in bash. Oh, and I love searching through apt-cache; it can present up to 100 packages with variations on the name I'm searching for. Useless. The actual package name & version should be displayed by APT, and you know it's possible; if APT didn't deal with this information internally then it wouldn't work. So all that needs to happen is for the Debian dev team to expose this info to the user.

I recently decided to invest in an Acer Aspire S3 for Linux duties. It's a pretty cool machine except for a few issues; the Linux distros I've tried (Mint and Bodhi) do not recognize the on-board bluetooth chip. There are a few remedies on the net and I'll try them when I get a second. For another, the battery life doesn't last the specified 6 hours (more like 2 1/2), and I hate that I can't upgrade the RAM from the onboard 4 gigs, but otherwise I like the extra oomph of the four i5 2467M processors a lot. With the later distros (3.x+ kernel) I don't even have to recompile the kernel for SMP, at least as far as I can tell.

For whatever reason HP has seen fit not to enable network scanning in a NETWORK SCANNING APPLIANCE in their pre-compiled HPLIP package, so to get this going you need to follow the procedure below. As mentioned earlier, I had some issues getting HPLIP compiled and installed, twice (I installed multiple distros on the new machine to try some stuff), so I present here how to do this to save you the hassle:

  1. First make sure you have the following installed; net-snmp-dev libcupsimage2-dev libsane-dev python-dev libusb-1.0-0-dev libusb-1.0-0
  2. Go to: http://sourceforge.net/projects/hplip/files/hplip and select the tarball appropriate, usually the latest. DO NOT download the precompiled package (like the .deb package for Debian platforms), you need to compile this to get scanning working.
  3. Extract the files and cd into the directory you extracted them to.
  4. Run the following command:
    ./configure --with-hpppddir=/usr/share/ppd/HP --prefix=/usr --enable-udev-acl-rules --enable-qt4 --disable-libusb01_build --enable-doc-build --disable-cups-ppd-install --disable-foomatic-drv-install --disable-foomatic-ppd-install --disable-hpijs-install --disable-udev_sysfs_rules --disable-policykit --enable-cups-drv-install --enable-hpcups-install --enable-network-build --enable-dbus-build --enable-scan-build --enable-fax-build

  5. Run make
  6. Run sudo make install
  7. Run hp-setup (as your user, no need to run as root) and configure your printer setup (make sure the printer is on and online)
  8. Log out and log back in.

You should see a blue “hp” icon on your task bar now, and you should be able to print, scan, whatever.

The second thing I want to publish is a food recipe I've developed in my capacity as an amateur chef: the oyster omelette. This is a classic Chinese dish found mostly in Taiwan, but also in Hong Kong and other Chinese parts of Asia, here given a Japanese spin.

Japanese Oyster Omelette

Omelette Sauce:

  1. 4 Tblspn Shoyu (Soy Sauce)
  2. 3 Tblspn Zeisner curry ketchup, or plain ketchup + 1/2 teaspn curry powder
  3. Dash of cooking sake

Omelette:

  1. 5-6 raw Oysters
  2. 3 Eggs
  3. Bok choy, chopped
  4. Teaspn corn starch
  5. Clove garlic
  6. 1 Oz. Chopped mushrooms
  7. Make an omelette with the above ingredients

If you like curry and ketchup, get a bottle of Zeisner's. I get mine from my local Cost Plus food section, but you can order it from Amazon as well, though for more than the $4.99 I pay at Cost Plus. Amazon has other curry ketchups but I've tried them and Zeisner's is the best. I hope the above steps help you in your IT and culinary endeavors.

Qt for Android!

12/22/2012


It's been months since my last post, but I've been so busy! Between doing my best to nurture a new romance and rounding up jobs, it's been incredibly hectic. Having finally rounded up a really great job doing GUI programming with Qt, I've attacked it like a dog after meat. I love Qt programming; it's really great. I've spent the last few months learning more and more about this fantastic framework and I love it. I've always felt that messaging paradigms were the right key to distributed programming, and the Qt development team has done a remarkable job porting their great framework from Linux to Windows.

Qt has been sold again, I guess from Nokia to Digia; before that it was its own company under the name "Trolltech, LLC", I think. Obviously it's been acquired a few times because the people who bought it knew it was developing a great product. Nokia's recent troubles probably prompted the recent sale to Digia. It doesn't matter who has it as long as they don't mess with what's working.

As well as being able to create great UIs with this full-featured framework, KDE has just trotted out a "pre-beta" (kind of) version of Qt for Android called "Necessitas". What their relationship with Digia is I'm not sure; probably a licensing/legal thing, as Qt has an open source/community version maintained by a semi-separate org, I think. I'm not sure how all the relationships are intertwined; I just know there are a number of people involved in creating a really great framework.

I really wanted to do some Android development, so last year I went through the great effort of installing the Google Android Development Kit, and for me it was hell on earth. Maybe I'm stupid, but I just found the whole thing really messy and hard to use. I've never been a big Eclipse fan, but I use it when I must, and you must if you want to get up and running with the ADT quickly. Well, quickly is a relative term. After several weeks of installing things and adjusting settings I finally produced a fart app. The whole process was so painful, however, that I really kind of lost my enthusiasm for using THAT to create Android apps.

In the meantime, however, I was popping out code for a desktop project with wild abandon. After months of struggling to wrangle Android code with ADT/Eclipse I was actually having fun coding again. If I could get Qt on Android… wow! They have it!

It's still in pre-beta release, as I said earlier, but it is usable. That doesn't mean setup is a snap, either. Let me give you the steps I used to get going and save you the trouble:

I installed this on my MacBook Pro Windows machine, as my trusty Mint Linux box is too underpowered to run the Android Virtual Device with any speed.

  1. Install the Oracle JDK, set up your JAVA_HOME variable, and put %JAVA_HOME%/bin in your userspace path. 32- or 64-bit versions, it doesn't appear to matter.
  2. Download the Necessitas installer and run it. Take all default options. I don't particularly care to have Necessitas in my home directory but I decided not to test fate on this one. If you have it installed and running elsewhere please drop me a line.
  3. This might be an unnecessary step, but I followed the advice of one poster and installed the latest ant build system on top of the one provided by the necessitas installer.
  4. BEFORE DOING ANYTHING ELSE run the adb.exe in ..\necessitas\android-sdk\platform-tools and let it download and install all the android kits you think you will be using. I installed everything it would let me. You also need to delete some old libs and replace them with new ones, just use the provided buttons. The whole process took at least two hours. Brew some coffee, play some Warcraft, whatever your poison.
  5. Locate the AVD Manager that should now be in ..\necessitas\android-sdk (it won’t appear here until you do the previous step) and set up at least one AVD, I recommend one at ADK level 2.3, I think that’s API level 10.
  6. Download the Ministro II runtime kit and the MinistroConfigurationTool II.apk from the same place to the same directory as the adb.exe in ..\necessitas\android-sdk\platform-tools
  7. Open a cmd shell in THAT directory and execute 'adb install "Ministro II.apk"', and do the same for the configuration tool.
  8. Important: Create a new project (no need to code anything), and then select the "Projects" tab on the left side of the IDE. Select the "Run" button next to the "Add Kit" and "Manage Kits…" buttons, and change the "Android target Configurations" under "Run Settings" from whatever it is to "android-10".
  9. Run your project. You should see the Qt "Q" logo against a dark background, then a message about Ministro needing some libraries, followed by a progress screen as Ministro fetches them.

  10. This will go on for a number of minutes as Ministro downloads whatever it needs. This is a one-time deal for all your Qt apps. You will need to install Ministro on any device you want to run your Qt app on; obviously it's the runtime manager for Qt apps on Android.

You should see your app in the emulator's screen. Start coding your award-winning app with Qt. It's a pain in the ass to set up, but I found it a bit less of a pain compared to app development with Google's ADT, which would be an on-going torture. Things should improve, and a lot of these steps should become unnecessary as Necessitas matures.
