A Simpler MRU for GTKMM C++

08/15/2014

I guess I’m a glutton for punishment, or like the little bird that flies against the wind: I really enjoy developing desktop applications. It’s becoming something of a lost art; now it’s all about apps for phones and web applications. But I still really enjoy using my desktop and writing applications for it.

The complaint I most often hear from people opposed to adopting Linux as their desktop OS is the lack of applications for it. Well, that won’t change until people start writing those apps.

As far as desktop application design goes, everything should be as simple as possible, or so my philosophy goes. All components of application design should follow that simple rule where it makes sense, and I see no reason for that principle not to apply to the humble RUF, or Recently Used File list, also known as the MRU, or Most Recently Used list.

The MRU (from now on) is a tradition: it should be available in any mouse-driven GUI, and it should be easy to implement in any toolkit or framework. Imagine my surprise when I started doing the research on using the MRU facility as implemented in GTKMM; it’s pretty much the most complicated collection of classes I’ve ever seen. To display only the recent files relevant to your application you need to employ some sort of filter, and then there are all the complaints I read about those classes on the web. After that I stopped worrying about it and wrote my own class.

I have a lot of respect for Murray Cumming and the GTKMM team, and the whole GTK+ project; it’s a huge effort to maintain a unified set of APIs and keep them working on a moving platform like GNU/Linux/GNOME, I am surely aware. I’m also aware that there are usually a huge number of underlying reasons why a developer or organization implements a feature set the way they do. But sometimes you just want a thing to work the way you want it to.

When I dug a little deeper into how GTKMM’s RecentChooser classes work (there’s the base class widget, a dialog object, an action object, as well as a menu object, then the filter you need to employ, and on and on) I simply shrugged my shoulders and told myself, “I’m not doing that”. I get all the variations; obviously the action object is so you can add a history option to an edit menu, and so on. I just wanted the user to click on a dynamic menu that contained an MRU.

So with the history out of the way, I bring you a simpler method using the STL and GTKMM’s own menu API:

My current application for the GNOME desktop is a classic Model-View-Controller implementation with a menu bar, and of course under the File item is my MRU.

An MRU at the simplest level is a FIFO, and std::deque is perfect for that job. My application’s data class (a collection of structs, really) holds a reference to a std::deque object.

I started by adding some file menu entries, file1..file4, and binding them to a dummy function, knowing I could change the menu labels and bind them to an actual function later. These entries come from my application’s Glib::ustring ui_info (WordPress is having conniptions with the XML-heavy Gtk UI string, so look in the sample code for this).
(And from my app’s Gtk::ActionGroup object):
m_refActionGroup->add(Gtk::Action::create("file1",
    "file1", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file2",
    "file2", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file3",
    "file3", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file4",
    "file4", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));

The “on_dummy” method is just an empty method; we need it because the ActionGroup’s add() demands a handler. We’ll swap in the real method later.

My file open menu item, when activated, opens the file as usual, but it also takes the path it got from the file picker and sends it to a method that inserts the file into the deque object, after checking the current size of the deque.

In a header file we have these declarations (more on the signal array later):
std::deque<Glib::ustring> mru; // our deque of recent file paths
sigc::connection mru_sig[4];   // dynamic menu signals

Then, in the implementation file, comes our “MRU manager” method; app is just a pointer to an “application structure”, a struct containing the deque object among other things:

// If the deque already holds four files, pop the oldest off the back
if(app->mru.size() >= 4)
    app->mru.pop_back();
// then add the new file to the front
app->mru.push_front(str);
// pad the deque out to exactly four entries; empty slots stay hidden below
app->mru.resize(4);

Pretty simple stuff. Now, every time a file is opened it’ll be placed at the top of our deque, and older entries are rotated toward the bottom of the list each time a new file is pushed on. In this case I’m keeping the number of recent files at 4, but it would be simple enough to adjust that number, or make it user-configurable, by adding an integer class member and using it instead of the literal 4 above; see the sketch below.
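
For illustration, the snippet above could hang off a small class with a configurable depth (a sketch only; AppData and max_mru are names I’m inventing here, not part of the sample code):

class AppData {
public:
    std::deque<Glib::ustring> mru;   // recent file paths, newest first
    std::size_t max_mru = 4;         // user-configurable MRU depth

    void mru_add(const Glib::ustring& str)
    {
        if(mru.size() >= max_mru)
            mru.pop_back();          // drop the oldest entry
        mru.push_front(str);         // newest file goes on top
        mru.resize(max_mru);         // keep the padding slots, as above
    }
};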

Then comes the re-assignment of the menu signals. Earlier in the method I grab the menu item widgets with a simple array of pointers:

Glib::RefPtr<Gtk::UIManager> _refUIManager; // Typical GTKMM stuff
Gtk::Widget* file[4];

file[0] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file1");
file[1] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file2");
file[2] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file3");
file[3] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file4");

The paths refer to the file menus in the Gtk XML UI definition; if you’re familiar with Gtk desktop programming with GTKMM you should know how that works. We’ll need these as references for the dynamic menus we’ll be connecting the activation signals to. Speaking of which, here’s how those signals are connected to our menu items:
int n = 0;
for(std::deque<Glib::ustring>::iterator it = app->mru.begin();
        it != app->mru.end(); ++it) {
    // keep only the file name after the last '/' for the menu label
    const Glib::ustring& label =
        (*it).substr((*it).find_last_of("/") + 1);
    dynamic_cast<Gtk::MenuItem*>(file[n])->set_label(label);
    // drop any stale handler before binding the new path
    app->mru_sig[n].disconnect();
    app->mru_sig[n] = dynamic_cast<Gtk::MenuItem*>(file[n])->signal_activate().
        connect(sigc::bind(sigc::mem_fun(*this, &ExampleWindow::on_mru), label));

    if(dynamic_cast<Gtk::MenuItem*>(file[n])->get_label().length() > 0)
        file[n++]->show();
}

We iterate through our list of four file paths, pull out the last path component for the name we display in the menu, and then do a blanket signal disconnect on the item. If we didn’t, signals would stack up on the item and we’d have several file paths flying at our “open file” method.

We then connect a new signal bound with the path data we want the menu item to open.

The signal method is simplicity itself:

void
ExampleWindow::on_mru(Glib::ustring& label)
{
    std::deque<Glib::ustring>::iterator it = _app->mru.begin();
    for(; it != _app->mru.end(); ++it) {
        const Glib::ustring& text =
            (*it).substr((*it).find_last_of("/") + 1);
        // the bound label matches the tail of the stored path
        if(text.find(label) != Glib::ustring::npos)
            std::cout << *it << std::endl;
    }
}

The bound text from the label is searched for in the deque; if we have a match, we have our full path to the recently processed file.

There: a functioning MRU for a GNOME desktop application without the hassle of Gtk::RecentChooser. Couple that with a way of serializing the list* between sessions and you have a full-fledged MRU feature for your app.
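
Serializing can be as simple as one path per line in a plain-text cache file. Here’s a minimal sketch under that assumption (the file name and the 4-entry cap mirror the examples above):

#include <fstream>
#include <string>
#include <deque>
#include <glibmm/ustring.h>

void save_mru(const std::deque<Glib::ustring>& mru, const std::string& path)
{
    std::ofstream out(path.c_str());          // e.g. "/home/user/.myapp.mru"
    for(std::deque<Glib::ustring>::const_iterator it = mru.begin();
            it != mru.end(); ++it)
        if(!it->empty())
            out << *it << std::endl;          // one full path per line
}

void load_mru(std::deque<Glib::ustring>& mru, const std::string& path)
{
    std::ifstream in(path.c_str());
    std::string line;
    while(mru.size() < 4 && std::getline(in, line))
        mru.push_back(line);                  // oldest entries land at the back
}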

There’s always room for improvement; traditionally MRUs have key accelerators (“1, 2, 3…”) and that would be a nice touch, and simple to add. Ideally this should also be a fully encapsulated object, a drop-in, rather than the collection of methods on a main class presented here.

The sample code shows a simple implementation using GTKMM 2.4, but I don’t believe there’s anything in the code preventing it from being converted to 3.0. I hope this helps developers create more applications for Linux; we need all we can get!

There’s a very simple sample you can grab here; build it with:

g++ -Wall -std=c++11 examplewindow.cc main.cc -o menu `pkg-config --cflags --libs gtkmm-2.4`

*Keep your fingers crossed, I may publish a much simpler alternative to Gconfmm.


Design Fail

05/10/2013


As I get older I get very short with things. I don’t mean things and people in general, I’ve always gotten short with them; I mean things. Products. And I mean products that anyone might buy, rich or poor, so I’m speaking of products of average quality, or what should be average quality. And I don’t necessarily mean quality of materials; I’m talking design.

These are failures that happen even after what is supposed to be some pretty extensive quality assurance testing, especially in the tech field. This is the product that causes my current ire:
This is a T-Mobile-branded portable USB device recharger; I think the product line is called “MyTouch” or some such nonsense. I had a small Duracell recharger that I got a lot of good use out of, it was Well Designed, but too small; I needed something with more oomph, more amps, man. So the last time I was at my T-Mobile store picking up some screen protectors for my phone I noticed this thing and told the sales droid to put it in with my purchase. Bad decision. It has one major design flaw.

It looks and works great sitting on a desk. I wonder if that’s what the design team on this turkey was going for. Unfortunately that’s not a useful feature in a PORTABLE CHARGER. The flaw is an obvious one too; I can’t believe this thing went through any quality control. It was probably made in China, where I notice no one tasted the baby formula before it was shipped either.

This thing is most useful going with you somewhere out in the wide world. But the designers have thoughtfully added a really great feature: a charge meter, which is OK when you want to know how much charge is left in the thing. Actually, it’s really only useful for telling you whether it has charge or not; I can’t imagine four LEDs can tell you much beyond “yes” or “no” charge in a recharger. I guess knowing that it’s approximately half-charged is OK. To activate the meter you press on the top of the case. It doesn’t take much effort, either, for the pressure of a pocket inseam, the netting of a backpack, or the papers of a briefcase to activate the damn thing. ANY PRESSURE ACTIVATES THE METER. Meaning by the time you get around to needing a recharge on the road it’s already been DEPLETED BY THE CONSTANT ACTIVATION OF THE NEARLY USELESS METER.

It’s less than worthless: a portable recharger that’s always needing to be recharged. The only way to avoid the constant depletion of this stupid gadget is to leave it alone on your desk at home, encased in acrylic, safe from the harm of the real world. This could easily have been prevented with the addition of an on/off slide switch, which would keep the meter button from activating the meter and depleting the charge. Thus I bestow upon the T-Mobile MyTouch USB Portable Charger design fail of the year.

Building Kernel Modules with Autotools

05/06/2013

[Figure: system call graphs of Apache and IIS]

The above graphs represent one of the reasons I really love Linux: the one with the incredibly horrible tangle of tentacles reaching into system services is the system call graph of IIS; the other is Apache’s. Whoever designed Apache’s architecture is a virtuoso of simplicity. You can see the same philosophy repeated in most of the tools and utilities in the stable of Linux’s OSS offerings.

On the other hand, Linux has a vastly varied and expansive collection of tools, many of which do the same tasks in very different ways. It can be a nightmare trying to tame all the different approaches to solving problems that the world of Linux developers takes on, not to mention all the architectures and hardware platforms Linux supports.

Autotools is one such tool. It addresses the issues that are typically encountered when trying to create applications that can be deployed on many platforms. The main complaints regarding autotools, certainly justified, are that it has a very high learning curve, that its syntax is among the most cryptic of any digital tool out there, and, one I’ve encountered time and again, that its versioning is not very well controlled. Interestingly, at least in my experience, it seems to work very well; I’ve rarely encountered problems with it. When well implemented it really runs like a champ.

Replacement configuration tools have been proposed and created; probably the most famous is CMake, which works very well and is quite a bit easier to understand. But I really had my heart set on building a kernel module with autotools: 1) because I’m a glutton for punishment, 2) because autotools is the traditional way of doing portable things on Linux, 3) it’s far older than any of the other tools, 4) it has a very big array of options and is quite powerful, and 5) if I was successful I would have something really cool to write about on this blog.

Autotools is not without its problems, as very well documented by thousands of developers around the world. One of the biggest problems I can see is that the autotools “suite”, as it were, is not very well synced. Or, to put it another way, not synced in a way that is easy for the end user to understand. Different versions of each tool can have adverse effects on the others, and it’s not at all clear which version of what works with what. For example, on my Mint 13 system I have libtool version 2.4.2 dated 2011 and autoconf version 2.69 dated 2012; sounds good, right? But for some reason the series of commands that leads to libtool creating “ltmain.sh” stopped working, and I have no idea why; I didn’t knowingly install anything new with regard to libtool. In googling the situation I replaced my invocation of “libtoolize --automake” with plain “libtoolize” followed by “autoreconf -i”, which got me back on track. My previous train of invocations, which was “aclocal”, followed by “autoheader”, then “libtoolize --automake”, on down to “autoconf”, just stopped working. I don’t know why, and I can’t really determine which invocations are obsolete and which are recommended in the present state of these tools. I’ll show you how I got this to work, and maybe someone can enlighten me.

In researching this task I found a lot of web comments that went along the lines of “look, just use KBuild, forget autotools”. Well, again, “interestingly”, what I’m going to show you doesn’t do away with KBuild; it’s necessary for building kernel modules (as far as I know) no matter what your build system is, even if it’s a simple shell script. But the comments were disheartening, leading me to believe that my quest would end in failure. Then I happened upon a small kernel driver, LiMIC, a driver for MPI node communication in Linux computing clusters. And in that tiny archive was a tiny example of a kernel driver using autotools. An example is worth a thousand words.

I sat down, tore the archive apart, and examined every config file involved in the autoconf process for this driver. In only a few files lay the wisdom I’d been searching for. An example is good, but it’s not much better than worthless if it’s made up of thousands of files and tens of thousands of symbols. LiMIC is perfect; it’s a fortune cookie of technical wisdom, and the fruit of that wisdom I share with you today. Another “interesting” point (I shall try not to over-use that word) is that I’m surprised at the small number of tricks that must be used to get autoconf to work the way we want it to: a very tiny piece of a makefile must be placed in the source directory, as well as a fake .in file, otherwise automake complains of a missing .in file.

First, download this archive. Inside you’ll notice an .am file, an .ac file, and a “module” directory. In that is the source of the old chardev driver, the fake .in file, and the abbreviated Makefile.

The versions of everything involved in my effort are here:

  • libtool 2.4.2
  • automake 1.11.6
  • autoconf 2.69

Starting with the configure.ac we have AC_PREREQ, which contains the version of autoconf we want. I have no idea what is supposed to go here, as I don’t know which versions of autoconf have which features I need; I simply put the version of autoconf installed on my system. Back-revving that number might be a good idea, or it might not, I don’t know. After that is AC_INIT; this contains fields such as the name of the project, the version, and a project-related URL, and I believe it can contain a number of other fields. I have no idea what else can go in there and I’m not too sure it matters. That’s followed by AC_CONFIG_SRCDIR, which, as the name says, tells autoconf where the top of the source tree is.

Next we have AM_INIT_AUTOMAKE, which initializes automake (surprise). Murray Cumming, team lead of Gtkmm, recommends adding the “1.10 -Wall no-define” options, with an optional “foreign” added so we don’t have to deal with all the readme and info files as the GNU style of automake demands; I took that option as well. This is followed by instructions telling automake to add the m4 macros and what to call the automake portability header file. I’m really confused by the current state of the m4 macro processor; it seems to have different behavior based on the versions of the other tools in the suite. Not unexpected behavior when dealing with technology, but I can’t begin to qualify or nutshell what behaves like what when invoked by the other. I just know I got this stuff to work.
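
Condensed, the opening of a configure.ac along those lines might look like this (a sketch with placeholder names and version numbers, not the exact file from the archive):

AC_PREREQ([2.69])
AC_INIT([chardev], [1.0], [http://example.org/chardev])
AC_CONFIG_SRCDIR([module/chardev.c])
AM_INIT_AUTOMAKE([1.10 -Wall no-define foreign])
AC_CONFIG_MACRO_DIR([m4])
AC_CONFIG_HEADERS([config.h])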

Now we get to the important part: these are what distinguish a kernel project from an application. First we use AC_SUBST directives to pass to the resulting makefile where things are. In the next few directives we pass the kernel version and location, and some directives that differentiate a kernel module from sundry application builds; in addition to the usual AC_PROG_CC and AC_PROG_INSTALL compiler directives we add the LT_INIT and AC_SUBST([LIBTOOL_DEPS]) macros. The rest are checks, and some of the sections are just comments; for example, there are no external libraries necessary, so that section is blank. One additional option I think is important, which goes with the variable substitutions, is the set of options that can be passed to the configure script that results from this file. If we were to make the module an option, instead of the whole point of this exercise, we could add an AC_ARG_ENABLE directive such as:
AC_ARG_ENABLE([module],
    [AS_HELP_STRING([--enable-module],
        [Build chardev kernel module])],
    [],
    [enable_module=no])

This directs configure to accept the optional “--enable-module” switch. The AC_ARG_ENABLE macro is followed up with an AM_CONDITIONAL([BUILD_KMOD], [test "x$enable_module" != "xno"]) instruction. This “binds” the result of the previous test to the variable BUILD_KMOD, with a default of “no”, which will then go into the next file, “makefile.am”.

A quick word about AC_ARG_ENABLE: I’ve noticed variations and other forms of argument passing to the configure script, which leads me to question whether this macro is up to date or not. I don’t know, and I’m frankly out of the energy to research it. I did a quick search to see if there were any glaring comments about it and didn’t find anything. Obviously, do your own research. I’ve tried to use the most up-to-date autoconf macros throughout, as far as I can tell, such as LT_INIT, but the info on this stuff is so obtuse that it’s really hard to keep track of everything, at least for me.

Returning to argument passing: if the option existed we would put a condition “if BUILD_KMOD” in the makefile.am, and underneath it we would put all the code that builds the module, as sketched below.
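
In makefile.am terms that conditional might look like the following (a sketch; SUBDIRS pointing at the “module” directory is my assumption about the layout):

if BUILD_KMOD
SUBDIRS = module
endif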

Every makefile.am should start with “ACLOCAL_AMFLAGS = -I m4” up to automake 1.12; after that we can do away with it as it will be deprecated, but for now we need it or automake will complain about missing files. Next we have a directive that lists all the files that are part of the package but needn’t be installed (EXTRA_DIST). Then there are a few macros, which are explained here. Now we get to the meat of the matter: what follows is pretty much a standard makefile, with the addition of macros and variables, populated from the configure.ac, that tell the system where to put the driver’s udev rules, plus our target build rules. We are using third-party make rules as described here. Hooks are described as “…guaranteed to run after the install of objects in this directory has completed”, and apparently we need a hook to install the chardev module.

Lastly, in the source, or “module”, directory, we have our source files plus a Makefile fragment, the complete contents of which is our resulting object file as the target of the KBuild “obj-m” directive. There is also an empty “.in” file, needed to induce automake to place the other necessary files in this directory; in this case it’s chardev.in, though I’m not sure it can’t be named anything. Finally, create an empty m4 directory in our “top_build_dir”, the top of the module tree, or autoconf will complain. Hopefully this will be changed soon in upcoming revs.
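
That KBuild fragment really is as small as it sounds. Assuming the source file is chardev.c, it boils down to a single line:

obj-m := chardev.o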

With the main components of this simple driver in place we can issue the command that generates all our supporting project files: in the top of the driver directory, issue “autoreconf -i”. It can take a minute depending on the size of the project, but soon you should see output regarding a number of files being created. The result should be the familiar “configure” script that kicks off the configuration process. After configure completes, you know the drill: make should create a “chardev.ko” module in the “module” directory, and sudo make install will place the module in the system modules directory and the udev rules file wherever is appropriate for your system. I’ve included a skeleton file here; all you need to do is issue the “autoreconf -i” command, then “./configure”, and so on, to build the project. I hope you’re as excited as I was to be able to do this using the autotools suite; it’s exciting to me to prove the vast majority of opinion not exactly correct.

After I write a few shell scripts to help me manage my growing source archive I’m going to investigate whether it’s possible to create kernel modules with CMake. Ciao!

Bluetooth on the Acer Aspire S3

03/12/2013

This is the same exact model S3 I bought. I love the thinness it sports.

I was getting desperate to replace my aging Gateway netbook as my main Linux iron and was really pleased to find a demo Acer Aspire S3 available for a decent price at a local Office Depot. It has all the features of my old single-core 64-bit Celeron-powered Gateway, but the Acer has a quad-core i5 processor and a faster clock speed. Other than the beefier CPU it’s the same in all respects; even the RAM is the same 4 GB, although at a faster FSB, of course. Another thing that was exactly the same: neither book appeared to have bluetooth. No matter, I bought a cheap USB bluetooth adapter and was off to the races, for 6 months.

Dongles are kind of a pain: you have to remember to plug them in, and they are getting REALLY small these days, so they can get lost. I already lost one since buying this rig. I considered ripping the bt chip out of my dongle and soldering it directly to the USB bus inside the case of my new Acer. But I always had a nagging feeling that the box that housed my new rig said something about it having bluetooth. I suppressed the feeling for 6 months, until I saw somewhere on the net that, indeed, the Aspire S3 does have a bt chip inside.

Well, knock me over; what the hell have I been doing all these years? The old Gateway probably has one too. The first thing one must do is make sure the chip is enabled by throwing down a Fn+F3 combo. When I found this and tried it I was almost ready to throw up: a “B” rune symbol appeared on my taskbar. But that’s just the beginning. For whatever reason the stock Linux kernel’s bt driver code doesn’t include support for the Atheros AR3012 bluetooth chip, although it supports two of the third-party adapters out in the wild. On this machine the chip is VAR’d by Lite-On and shows up as such in an lsusb listing. I’ve noticed the S3 has varying specs, having sported an i3 AND an i7 in various incarnations, so you’ll need to be sure of what you have in yours. Fortunately you can google the manufacturer and equipment number from the lsusb output and confirm who made your S3’s bt chip.
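
For example, the listing will contain a line something like this (hypothetical output; the two hex fields after “ID” are the manufacturer and device numbers you’ll need in the steps below):

$ lsusb
Bus 002 Device 004: ID xxxx:xxxx Lite-On Technology Corp.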

So you really can enable the on-board bt adapter, and it’s a fairly simple procedure, assuming you are comfortable re-compiling your distribution’s kernel. If so, I’m going to show you how to do it, step by step.

First, there are lots of instructions for recompiling your kernel on the net, and they seem to only grow in verbosity, not get simpler. Let me tell you: if you are dealing with a 3.x kernel the following steps should work for any generic distribution. All flavors of Linux work off the same kernel, the one Linus Torvalds maintains to this day (with help, of course), so recompiling it really should be the same across all the distributions, and it is. Those other extra steps seem to be geared to using some odd, unnecessary feature that a particular distro offers. Certainly if these steps don’t work for you I’d appreciate a note with an explanation, but really, this recipe is probably going to work for most, and it’s the most concise I’ve ever seen.

  1. sudo apt-get install build-essential (on other distros use your package manager to get the tools you need to compile your kernel)
  2. Go to kernel.org and grab whatever version you feel comfortable with, 3.8 is out but I grabbed 3.5.
  3. Expand the archive into a scratch directory.
  4. Add the following variables to your shell:
    • export CHOST="x86_64-pc-linux-gnu" (we're assuming a 64-bit architecture)
    • export CONCURRENCY_LEVEL=4
    • export CFLAGS="-march=corei7 -O2 -pipe"
    • export CXXFLAGS="$CFLAGS"
  5. Here's where we fix bluetooth for the Atheros chip, substituting NNNN for the manufacturer and device numbers for your chip from lsusb:
    • Edit drivers/bluetooth/ath3k.c and in the table static struct usb_device_id ath3k_table[] add a line for your chip: { USB_DEVICE(0xNNNN, 0xNNNN) },
    • In the same file, look for static struct usb_device_id ath3k_blist_tbl[] and add { USB_DEVICE(0xNNNN, 0xNNNN), .driver_info = BTUSB_ATH3012 },. Save that file.
    • Open drivers/bluetooth/btusb.c, adding { USB_DEVICE(0xNNNN, 0xNNNN), .driver_info = BTUSB_ATH3012 }, to the static struct usb_device_id blacklist_table[] structure. Save.
  6. Issue "make oldconfig", and just hit return through all the options.
  7. "make all"
  8. "sudo make modules_install install"
  9. "sudo update-grub" (if you use grub)
  10. Reboot into your new kernel.

Your Acer Aspire S3 should be bluetooth enabled now.

Tina Fey Hates You (On Twitter)

03/10/2013

Fey at another of the award ceremonies that actors give each other like Pez

Not long after the first presidential debate in October 2012, the (now) re-elected President Obama remarked to the press that he was somewhat taken by surprise by the adversarial tone pretender-to-the-throne Mitt Romney had taken with him. It seems that the President had been surrounded by Yes Men (and Women) for so long that when someone finally was in a position to tell him “no” he was taken aback. I get that, easy to understand, and the President was of course intelligent enough to recognize this personal failing and snap back. By the next debate he was in top form.

What is the internet but a huge public platform that anyone can have access to? South Korean pop star “Psy” shows us that any no-talent, America-hating hack can suddenly have a flash of pop brilliance and become a sensation. I wonder what Psy thinks of America now?

Tina Fey is a talented comedic writer and actress, no doubt about it. Unfortunately she appears to suffer from the same myopic sickness that the MPAA and the five major record labels seem to suffer from; that is, the-cost-of-media-production-is-too-low-itis!

The cost of mass media production in all sectors (film, audio, paper, all transmission, all forms) has dropped like a rock, and with it the real price of fame has plummeted. Since the turn of the 20th century the falling cost of mass media production has put fame well within the means of even the most modest of fame seekers. Before 1900, for centuries, entertainers were not the upper strata of society. In fact actors, minstrels, dancers, anyone employed in the arts, were considered the lowest of the low, pretty much on par with thieves and beggars. It really wasn’t until the 18th century, when opera was considered (and still is, of course) among the highest of the performing arts, that playbills advertised the skills of the top divas of the day. As the price of printing (media production) fell, the fame of the actor rose, pretty much in inverse proportion to the cost. Finally, by 1900, with the advent of celluloid and the rise of the fan (or gossip) magazine, we see the emergence of American Royalty, the Movie Star. And media production costs continued to drop. Finally, in the 21st century, anyone can create nearly any kind of media unit and get it distributed to everyone for the price of a 6-pack.

It seems Tina Fey doesn’t like that. I noticed her comments on Twitter last month and promptly ignored them, but her arrogance kept nagging at me. Finally, I saw her speak on some piece of news fluff last week and my mind wandered back to that Twitter comment, and I decided I just had to poke the clown with a stick, even as tiny a one as I can manage. Don’t cost nothin’…

What Tina Fey does is certainly worth money to people who are in the business of media production. Some people call it “art”, and some of it certainly is, but unlike a cure for cancer, which either works or it doesn’t, beauty is in the eye of the beholder, making performing art much more subjective. If I have terminal cancer, I will die without a cure. If I lack the funny bone that makes Tina Fey the most amusing and interesting media personality in the world, I will live on to not laugh another day. NOT ONLY THAT, but it severely diminishes her worth to me and people like me who don’t find her very amusing.

She is talented, no doubt. But she, along with Gwyneth Paltrow, has that attitude Obama recognized in himself as a flaw, that elitist “something”, only Fey has no idea that it’s not a good thing. Rather than celebrating those who achieve something good in the mass media industry and the ability that low cost has given them to access it, she’d rather throw the baby out with the bath water and see people jump over some kind of bar to get that access. It was a flip comment, to be sure, but it’s just plainly obvious where Fey’s beliefs lie. Everyone is entitled to an opinion, even me. I suppose it’s to the credit of the American media that when the market takes a tumble because another bank was found to have laundered 1.5 billion dollars of drug and terrorist money with the help of the Treasury Department, they don’t immediately turn to Tina Fey for comment.

Tina, you’re a valued contributor to the arts and a terrific pop icon. But when it comes to the technological aspects of new media, in the words of Eddie Murphy, go have a Coke and a smile and shut the fuck up.

I love that Paltrow doesn’t even seem to be fazed or hurt by the notion that saying bad things about your customers should be frowned upon. I guess even elitist assholes are right twice a day.

OSS Development Can Be Pain

02/23/2013

It’s really not THAT bad, I suppose.

I’ve got a really good idea for what I believe is a marketable IT utility for the restaurant industry, so I sat down earlier this week to set up my tools and libraries and roll up some prototype code. It’s been a while since I’ve done any Apache module development, so I set about re-familiarizing myself with the code and procedures.

One thing I have a real stick up my ass about is documentation: it needs to be complete and accurate, or you don’t publish it. Publishing documentation that is inaccurate or incomplete is a sin. Too bad so many web sites do it.

I get it if it’s just out of date, but inaccurate is unforgivable. Stuff like “…then edit the file…”; so many times I see this without any reference to WHICH file is supposed to be edited, or edited with what. Another thing that bothers me is when I need to track down instructions that leave off without any discernible next step, and then I have to hunt down that next step on some other website. I think a lot of this is due to the nature of OSS, where things change at the drop of a hat, but if you’re going to publish information, it’s of no use if it’s incorrect.

I want to jot down the steps I took to get an Apache 2 module working from source before I forget, so here goes. We are assuming an Apache module called “mod_module.c”. First, install the Apache 2 dev files and the Apache extension tool (I’m not supporting Apache 1, sorry): “apt-get install apache2-prefork-dev”, prefixing the command with sudo if you’re not root, of course. Then write your way cool extension; the important bit is this: in a remarked section of the code place a structure similar to this:

/* MODULE-DEFINITION-START
* Name: mod_module
* ConfigStart
MOD_LIB="-L/usr/local/lib/somelib -lsomelib -lm -lz"
if [ "X$MOD_LIB" != "X" ]; then
LIBS="$LIBS $MOD_LIB
echo " + using $MOD_LIB for Mylib support"
fi
* ConfigEnd
* MODULE-DEFINITION-END
*/

You’ll of course be substituting the “mod_module” references for the actual names in your dependency list. The important part, the part that took me a few tries to understand, is Apache 2’s LoadModule directives. In your distribution’s apache2 “../mods-available” subdirectory you need to make sure there’s a “mod_module.load” file; make sure the name of the file corresponds to the “Name” field in this struct, and the file should contain a LoadModule directive that looks like "LoadModule mod_module /usr/lib/apache2/modules/mod_module.so".

You’ll then want to build the module with "sudo apxs2 -c -lsomelib -lm -lz mod_module.c", keeping the “somelib” reference if your module has a dependency on some external library. You’ll end up with “mod_module.la”; this also threw me, as you end up with a libtool archive rather than a shared object. The next step creates the actual shared library Apache will use: “sudo apxs2 -i mod_module.la” will build the shared library and place it where Apache will find it. Then “sudo a2enmod mod_module” activates it. You also need to make sure that whatever properties your module requires are present in your site’s config file, in the <sites-available>/<website> “Directory” directives or your “.htaccess” file.
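
For context, here is roughly what a minimal mod_module.c body looks like; this is a bare-bones sketch modeled on the stock apxs template, not my actual module, and the “module-handler” name is arbitrary:

#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "ap_config.h"

/* Answer requests whose handler is set to "module-handler" */
static int module_handler(request_rec *r)
{
    if (strcmp(r->handler, "module-handler"))
        return DECLINED;                  /* not ours; let Apache move on */

    ap_set_content_type(r, "text/plain");
    if (!r->header_only)
        ap_rputs("Hello from mod_module\n", r);
    return OK;
}

static void module_register_hooks(apr_pool_t *p)
{
    ap_hook_handler(module_handler, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA module_module = {
    STANDARD20_MODULE_STUFF,
    NULL,                    /* create per-directory config */
    NULL,                    /* merge per-directory config  */
    NULL,                    /* create per-server config    */
    NULL,                    /* merge per-server config     */
    NULL,                    /* command table               */
    module_register_hooks
};

A "SetHandler module-handler" line in the site’s Directory block or .htaccess then routes requests to it.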

Believe me, it took over three days for me to figure out how to make this all work, given the disparity, dry prose, and plain incorrectness of the textual instruction available on the net. I’m sure it’s correct; however, I’m also sure I’ve missed something, so if anyone out there tries these steps and can’t get them to work I’d appreciate feedback.

Two Recipes

01/22/2013


I want to impart two procedures today: the first is the right way to configure the HP Linux Imaging and Printing suite (HPLIP) on Linux; the second is my take on the classic Taiwanese oyster omelette.

I purchased a new HP Photosmart 5520 so I could scan some forms to send to an employer, and decided I needed a network printer since I’ve not had a console, or tower, PC in years. This way I can walk all around the house and not need to worry about connecting a USB port to a printer during the odd printing job. I also knew I wanted an HP printer, since their Linux support, like Intel’s, is pretty generous.

So I get the thing home, unpack it, and set it up. It was pretty painless with my MacBook Pro Windows 7 machine, as I assumed it would be. Locating and installing the HP driver suite for Linux (HPLIP) was also pretty painless on my underpowered Gateway E1440u netbook. The thing about the book, however, is that the installation was painless due in no small part to the fact that the OS install was mature, and over time I had installed many (or all) of the dependencies the driver suite needed. Installing HPLIP on a new machine was painful, though: the usual dependency-chain issues. APT is a great system, but why can’t it report the actual NAME of any dependencies it finds lacking for any particular application you want to install? This one feature would increase the usefulness of APT immeasurably.

There are several names for packages: there is the name that is useful, then there are usually one or more quite useless names for every package in every repository for every distro. Searching for the package that APT refers to and presents to the user is usually not very fruitful. APT should report the ACTUAL NAME and minimum version of the missing dependency it’s looking for, like this: “missing libgooblat-dev-3.3.4.6”, NOT “missing gooblat-devel”. It drives me crazy when it complains about a missing lib yet refers to the binary package name.

YES, I know there are tools and commands that will tell you what the missing package(s) are, but they aren’t widely publicized, and even now I dread the effort that will be needed to locate them. And YES, I know Synaptic will resolve dependency issues, but that completely misses the point; I have little love for firing up Synaptic when I’m in the heat of installing stuff in bash. Oh, and I love searching through apt-cache; it can present up to 100 packages with variations on the name I’m searching for. Useless. The actual package name and version should be displayed by APT, and you know it’s possible; if APT didn’t deal with this information internally it wouldn’t work. So all that needs to happen is for the Debian dev team to expose this info to the user.
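
For what it’s worth, one of those not-widely-publicized tools is apt-file, which maps file names to the packages that ship them. A quick sketch (the libsane example is arbitrary):

sudo apt-get install apt-file
sudo apt-file update
apt-file search libsane.so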

I recently decided to invest in an Acer Aspire S3 for Linux duties. It’s a pretty cool machine except for a few issues: the Linux distros I’ve tried (Mint and Bodhi) do not recognize the on-board bluetooth chip (there are a few remedies on the net and I’ll try them when I get a second), the battery life doesn’t last the specified 6 hours (more like 2 1/2), and I hate that I can’t upgrade the RAM from the onboard 4 gigs. But otherwise I like the extra oomph of the i5 2467M processors a lot. With the later distros (3.x+ kernel) I don’t even have to recompile the kernel for SMP, at least as far as I can tell.

For whatever reason HP has seen fit not to enable network scanning in a NETWORK SCANNING APPLIANCE in their pre-compiled HPLIP packages, so to get this going you need to follow the procedure below. As mentioned earlier, I had some issues getting HPLIP compiled and installed, twice (I installed multiple distros on the new machine to try some stuff), so I present here how to do this to save you the hassle:

  1. First make sure you have the following installed; net-snmp-dev libcupsimage2-dev libsane-dev python-dev libusb-1.0-0-dev libusb-1.0-0
  2. Go to http://sourceforge.net/projects/hplip/files/hplip and select the appropriate tarball, usually the latest. DO NOT download the precompiled package (like the .deb package for Debian platforms); you need to compile this yourself to get scanning working.
  3. Extract the files and cd into the directory you extracted them to.
  4. Run the following command:
    ./configure --with-hpppddir=/usr/share/ppd/HP --prefix=/usr --enable-udev-acl-rules --enable-qt4 --disable-libusb01_build --enable-doc-build --disable-cups-ppd-install --disable-foomatic-drv-install --disable-foomatic-ppd-install --disable-hpijs-install --disable-udev_sysfs_rules --disable-policykit --enable-cups-drv-install --enable-hpcups-install --enable-network-build --enable-dbus-build --enable-scan-build --enable-fax-build

  5. Run make
  6. Run sudo make install
  7. Run hp-setup (as your user, no need to run as root) and configure your printer setup (make sure the printer is on and online)
  8. Log out and log back in.

You should see a blue “hp” icon on your task bar now, and you should be able to print, scan, whatever.
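
If something doesn’t come up, HPLIP ships a self-diagnostic you can run; it reports missing dependencies, among other things (assuming the suite installed cleanly):

hp-check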

The second thing I want to publish is a food recipe I’ve developed in my capacity as an amateur chef: Oyster Omelette. This is a classic Chinese dish found mostly in Taiwan but also in Hong Kong and other Chinese parts of Asia, here with a Japanese spin.

Japanese Oyster Omelette

Omelette Sauce:

  1. 4 Tblspn Shoyu (Soy Sauce)
  2. 3 Tblspn Zeisner curry ketchup, or plain ketchup + 1/2 teaspn curry powder
  3. Dash of cooking sake

Omelette:

  1. 5-6 raw Oysters
  2. 3 Eggs
  3. Bok choy, chopped
  4. Teaspn corn starch
  5. Clove garlic
  6. 1 Oz. Chopped mushrooms
  7. Make an omelette with the above ingredients

If you like curry and ketchup, get a bottle of Zeisner’s. I get mine from my local Cost Plus food section, but you can order it from Amazon as well, though for more than the $4.99 I pay at Cost Plus. Amazon has other curry ketchups but I’ve tried them, and Zeisner’s is the best. I hope the above steps help you in your IT and culinary endeavours.
