The PHP Clusterfuck

05/16/2013


The above picture represents my opinion of PHP: everyone can use it, and everyone can fuck up a project with it. During the mid-2000s I was picking up PHP gigs left and right and working 'em as fast as I could, because every time I picked up a new gig I learned a new aspect of the language I really didn't like. I just bit my lip and moved on with the task at hand.

I am now seeing some new-ish discussion around Jeff Atwood's post from last year, The PHP Singularity. I don't remember how I landed on his post; it just seems to me that the old "PHP sucks" debate is gaining a little steam again.

The refutations seem to fall into a few camps: 1) PHP is used by so many people now that it doesn't matter (Jeff's post), or 2) out-and-out ignoring the facts that demonstrate why PHP is such a horrible language, as one commenter on Jeff's post did. He actually posted in reply (and in support of PHP) that he came away from Alex Munroe's famous blog post PHP: a fractal of bad design without gaining any insight into why PHP was a bad language. Really? That's VERY MUCH like reading "See Dick and Jane" and coming away asking "…but what were Dick and Jane REALLY doing?" Come on, pal. A critic hands you examples, and not just a few but COPIOUS examples that support his opinion, and you say "…but he really didn't explain what his problem with the tool was." Your love of the shitty tool is showing.

I should clarify my opinion and the above picture. I do not for a second mean to say that programming, indeed the entire genre of digital engineering, should be an elitist, members-only club. However, that certainly doesn't mean a liberal arts major or a basket weaver should walk in the door and start coding up critical infrastructure either. But IN MY OPINION, and it's just that, MY OPINION, PHP has allowed exactly that: middle managers coding up crap. And I know of one example. In a crunch and down a man, a middle manager rolled up her sleeves, coded up some missing parts of a PHP page, and completed the project herself. And it passed QA. Great. Then, believing she could complete the next project herself while saving some man-hour cash, she coded up the entire thing on her own. It crashed, burned, and a fire had to be put out due to her incompetence. Chances are she never would have attempted that had the project been written in Python, Ruby, or Perl.

But that's hardly a reason for rejecting a language, that someone might do bad things with it. And I won't go into all the reasons why PHP is a bad language; that's been done to death by people more elegant or crass than me. If you google terms like "php is a bad language" you'll dig up just as many, or more, links to pages that defend PHP. I'd really like to study those folks. Were they mass-hypnotized by PHP minions? How can so many people have drunk the Kool-Aid? It led me to question my opinion of the language, so I got back into it. About two minutes in I came back to my firm belief that I was right and all these people had indeed drunk the Kool-Aid. PHP is a terrible language. But Jeff Atwood asks us to stop and consider: "We get it already. PHP is horrible, but it's used everywhere." I reject this argument out of hand. Everyone could walk around with a dead bird carcass pinned to their lapel; that doesn't mean I should as well. He then goes on to disparage his readers by pointing out how obvious it is that PHP is a bad language, and that if you don't know it you're stupid. Interesting comment to put to your readers. As a second point he says that if you don't like something, make something better. This of course dismisses the many alternatives as "not good enough", apparently. Of which there are many; I'm not even going to bother naming them.

If a client wants something written in PHP, they certainly may have it. In my experience, however, they don't particularly care about the implementation, just the results, quickly. Sometimes integration with an existing tool or infrastructure dictates the implementation, or sometimes the customer wants a heterogeneous implementation, but not usually. After the mid-2000s PHP extravaganza people seem to have mellowed on its use, at least in my world. Thankfully, I've not had to pick up the double-clawed hammer in quite some time. Thankfully.

Design Fail

05/10/2013


As I get older I get very short with things. I don't mean people in general, I've always gotten short with them; I mean things. Products. And I mean products that anyone might buy, rich or poor, so I'm speaking about products of average quality, or what should be average quality. And I don't necessarily mean quality of materials; I'm talking design.

These are failures that happen even after what is supposed to be some pretty extensive quality assurance testing, especially in the tech field. This is the product that causes my current ire:
This is a T-Mobile-branded portable USB device recharger; I think the product line is called "MyTouch" or some such nonsense. I had a small Duracell recharger that I got a lot of good use out of. It was Well Designed, but too small; I needed something with more oomph, more amps, man. So the last time I was at my T-Mobile store picking up some screen protectors for my phone, I noticed this thing and told the sales droid to put it in with my purchase. Bad decision. It has one major design flaw.

It looks and works great sitting on a desk. I wonder if that's what the design team on this turkey was going for. Unfortunately that's not a useful feature in a PORTABLE CHARGER. The flaw is an obvious one, too; I can't believe this thing went through any quality control. It was probably made in China, where I notice no one tasted the baby formula before it was shipped either.

This thing is most useful going with you somewhere out in the wide world. But the designers have thoughtfully added a really great feature: a charge meter, which is OK when you want to know how much charge is left in the thing. Actually, it's really only useful for telling you whether it has charge or not; I can't imagine four LEDs can tell you much beyond "yes" or "no" charge in a recharger. I guess knowing that it's approximately half-charged is OK. To activate the meter you press on the top of the case. It doesn't take much effort, either, for the pressure of a pocket inseam, the netting of a backpack, or the papers of a briefcase to activate the damn thing. ANY PRESSURE ACTIVATES THE METER. Meaning by the time you get around to needing a recharge on the road it's already been DEPLETED BY THE CONSTANT ACTIVATION OF THE NEARLY USELESS METER.

It's less than worthless: a portable recharger that's always needing to be recharged. The only way to avoid the constant depletion of this stupid gadget is to leave it alone on your desk at home, encased in acrylic, safe from the harm of the real world. This could easily have been eliminated with the addition of an on/off slide switch, which would prevent the meter button from activating the meter and depleting the charge. Thus I bestow upon the T-Mobile MyTouch USB Portable Charger the design fail of the year.

Building Kernel Modules with Autotools

05/06/2013

[Graphs: System Call Graph - Apache; System Call Graph - IIS]

The above graphs represent one of the reasons I really love Linux: the one with the incredibly horrible tangle of tentacles to system services is the system call graph of IIS; the other is Apache's. Whoever designed Apache's architecture is a virtuoso of simplicity. You can see this same philosophy repeated many times in most of the tools and utilities in the stable of Linux's OSS offerings.

On the other hand, Linux has a vastly varied and expansive collection of tools, many of which do the same tasks in very different ways. It can be a nightmare trying to tame all the different approaches the world of Linux developers takes to solving problems, not to mention all the architectures and hardware platforms Linux supports.

Autotools is one such tool. Autotools addresses the issues that are typically encountered when trying to create applications that can be deployed on many platforms. The main complaints regarding autotools, and certainly justified, are that it has a very high learning curve, that its syntax is among the most cryptic of any digital tool out there, and, one I've encountered time and again, that it's not very well controlled with regard to versioning. Interestingly, at least in my experience, it seems to work very well; I've rarely encountered problems with it. When well implemented it really runs like a champ.

Replacement configuration tools have been proposed and created; probably the most famous is CMake, which works very well and is quite a bit easier to understand. But I really had my heart set on building a kernel module with autotools: 1) because I'm a glutton for punishment, 2) because autotools is the traditional way of doing portable things on Linux, 3) it's far older than any of the other tools, 4) it has a very big array of options and is quite powerful, and 5) if I was successful I would have something really cool to write about on this blog. Autotools is not without its problems, though, as very well documented by thousands of developers around the world. One of the biggest problems I can see is that the autotools "suite", as it were, is not very well synced. Or to put it another way, not synced in a way that is easy for the end user to understand. Different versions of each tool can have adverse effects on the others, and it's not clear at all which version of what works with what. For example, on my Mint 13 system I have libtool version 2.4.2 dated 2011 and autoconf version 2.69 dated 2012; sounds good, right? But for some reason the series of commands that leads to libtool creating "ltmain.sh" stopped working. I have no idea why; I didn't install anything new with regard to libtool that I know of. After googling the situation I replaced my invocation of "libtoolize --automake" with just "libtoolize" followed by "autoreconf -i", which got me back on track. My previous train of invocations, which was "aclocal", followed by "autoheader", then "libtoolize --automake", on down to "autoconf", just stopped working. I don't know why. And I can't really determine which invocations are old and which are recommended in the present state of these tools. I'll show you how I got this to work, and maybe someone can enlighten me.
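For what it's worth, the two invocation chains look something like this. This is my reconstruction of the situation described above, not a canonical recipe; your versions may want something different:

```shell
# The old, manual chain that stopped working for me:
aclocal
autoheader
libtoolize --automake
automake --add-missing
autoconf

# The replacement that got me back on track:
libtoolize
autoreconf -i
```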

In researching this task I found a lot of web comments that went along the lines of "look, just use Kbuild, forget autotools"… well, again, "interestingly", what I'm going to show you doesn't do away with Kbuild; it's necessary for building kernel modules (as far as I know) no matter what your build system is, even if it's a simple shell script. But the comments were disheartening, leading me to believe that my quest would end in failure. Then I happened upon a small kernel driver, LiMIC, which is a driver for MPI node communication in Linux computing clusters. And in the tiny archive was a tiny example of a kernel driver using autotools. An example is worth a thousand words.

I sat down and tore the archive apart and examined every config file involved in the autoconf process for this driver. In only a few files lay the wisdom I'd been searching for. An example is good, but it's not much better than worthless if it's made up of thousands of files and tens of thousands of symbols. LiMIC is perfect; it's a fortune cookie of technical wisdom. And the fruit of that wisdom I share with you today. Another "interesting" point (I shall try not to over-use that word) is that I'm surprised at the small number of tricks that must be used to get autoconf to work the way we want it to. A very tiny piece of a makefile must be placed in the source directory, as well as a fake .in file, otherwise automake complains of a missing .in file.

First, download this archive. Inside you'll notice an .am file, an .ac file, and a "module" directory. In that is the source of the old chardev driver, the fake .in file, and the abbreviated Makefile.

The versions of everything involved in my effort are here:

  • libtool 2.4.2
  • automake 1.11.6
  • autoconf 2.69

Starting with configure.ac we have AC_PREREQ, which contains the minimum version of autoconf we want. I have no idea what is supposed to go here, as I don't know which versions of autoconf have the features I need; I simply put the version of autoconf installed on my system. Back-revving that number might be a good idea, or it might not, I don't know. After that is AC_INIT, which contains fields such as the name of the project, the version, and a project-related URL, and I believe it can contain a number of other fields. I have no idea what else can go in there and I'm not too sure it matters. That is followed by AC_CONFIG_SRCDIR which, as the name says, tells autoconf where the top of the source tree is. Next we have AM_INIT_AUTOMAKE, which initializes automake (surprise). Murray Cumming, team lead of gtkmm, recommends adding the "1.10 -Wall no-define" fields, with an optional "foreign" added so we don't have to deal with all the README and INFO files as is the GNU style of doing automake. I took that option as well. This is followed by instructions telling automake to add the m4 make macros and what to call the configuration header file. I'm really confused by the current state of the m4 macro processor; it seems to have different behavior based on the versions of the other tools in the suite. Not unexpected behavior when dealing with technology, but I can't begin to qualify or nutshell what behaves like what when invoked by the other. I just know I got this stuff to work.
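Putting that together, the opening of a configure.ac along these lines might look like the following. The project name, version, bug-report address, and source file path are placeholders of mine, not necessarily what LiMIC uses:

```m4
AC_PREREQ([2.69])
AC_INIT([chardev], [1.0], [bugs@example.com])
AC_CONFIG_SRCDIR([module/chardev.c])
AM_INIT_AUTOMAKE([1.10 -Wall no-define foreign])
AC_CONFIG_MACRO_DIR([m4])
AC_CONFIG_HEADERS([config.h])
```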

Now we get to the important part: these are what distinguish a kernel project from an application. First we use AC_SUBST directives to pass to the resulting makefile where things are. In the next few directives we pass the kernel version and location, and some directives that differentiate a kernel module from sundry application builds; in addition to the usual AC_PROG_CC and AC_PROG_INSTALL compiler directives we add the LT_INIT and AC_SUBST([LIBTOOL_DEPS]) macros. Then the rest are checks; some of the sections are just comments. For example, there are no external libraries necessary, so that section is blank. One additional option I think is important, which goes with the variable substitutions, is the set of options that can be passed to the configure script which will result from this file. If we were to make the module an option instead of the whole point of this, we could add an AC_ARG_ENABLE directive such as:
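As a sketch, the kernel-location plumbing might look something like this. The variable name KERNEL_SRC and the --with-kernel option are my own illustrative choices, not necessarily the ones LiMIC uses:

```m4
# Locate the kernel build tree; default to the running kernel's headers.
AC_ARG_WITH([kernel],
    [AS_HELP_STRING([--with-kernel=DIR], [kernel build directory])],
    [KERNEL_SRC="$withval"],
    [KERNEL_SRC="/lib/modules/`uname -r`/build"])
AC_SUBST([KERNEL_SRC])

# The usual compiler checks, plus libtool initialization.
AC_PROG_CC
AC_PROG_INSTALL
LT_INIT
AC_SUBST([LIBTOOL_DEPS])
```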
AC_ARG_ENABLE([module],
    [AS_HELP_STRING([--enable-module],
        [Build chardev kernel module])],
    [],
    [enable_module=no])

This directs configure to accept the optional "--enable-module" switch. The AC_ARG_ENABLE macro is followed up with an AM_CONDITIONAL([BUILD_KMOD], [test "x$enable_module" != "xno"]) instruction. This "binds" the result of the previous test to the variable BUILD_KMOD, with a default of "no", which will then go into the next file, Makefile.am.

A quick word about AC_ARG_ENABLE: I've noticed variations and other forms of argument passing to the configure script, which leads me to question whether this macro is up to date or not. I don't know, and I'm frankly out of the energy to research it. I did a quick search to see if there were any glaring comments about it and didn't find anything. Obviously, do your own research. I've tried to use the most up-to-date autoconf macros throughout, as far as I can tell, such as LT_INIT, but the info on this stuff is so obtuse and it's really hard to keep track of everything, at least for me.

Returning to argument passing: if the option existed we would put a conditional "if BUILD_KMOD" in the Makefile.am, and underneath it we would put all the following code to build the module.
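In Makefile.am the guard would look something like this (the rule bodies are elided; the point is just the conditional, whose name must match the AM_CONDITIONAL in configure.ac):

```make
if BUILD_KMOD
# ... module build and install rules go here ...
endif
```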

Every Makefile.am should start with "ACLOCAL_AMFLAGS = -I m4" up to automake 1.12; after that we can do away with it as it will be deprecated, but for now we need it or automake will complain about missing files. Next we have a directive that lists all the files that are part of the package but needn't be installed (EXTRA_DIST). Then there are a few macros, which are explained here. Now we get to the meat of the matter: what follows is pretty much a standard makefile, with the addition of macros and variables, populated from configure.ac, that tell the system where to put the driver's udev rules, plus our target build rules. We are using third-party make rules as described here. Hooks are described as "…guaranteed to run after the install of objects in this directory has completed.", and apparently we need a hook to install the chardev module.
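Here is a rough sketch of what such a Makefile.am might contain. It assumes a KERNEL_SRC variable substituted in from configure.ac; that name, and the file list, are my illustrative choices rather than LiMIC's exact contents:

```make
ACLOCAL_AMFLAGS = -I m4

# Shipped in the tarball but not installed by automake itself.
EXTRA_DIST = module/Makefile module/chardev.c module/chardev.in

# Delegate the actual compile to the kernel's Kbuild system.
all-local:
	$(MAKE) -C $(KERNEL_SRC) M=$(abs_top_srcdir)/module modules

clean-local:
	$(MAKE) -C $(KERNEL_SRC) M=$(abs_top_srcdir)/module clean

# Hooks run after the normal install for this directory has completed;
# use one to install the .ko (and, similarly, the udev rules file).
install-exec-hook:
	$(MAKE) -C $(KERNEL_SRC) M=$(abs_top_srcdir)/module modules_install
```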

Lastly, in the source, or "module", directory, we have our source files plus a Makefile fragment, the complete contents of which is our resulting object file as the target of the Kbuild "obj-m" directive. There is also an empty ".in" file, which is needed to induce automake to place the other necessary files in this directory; in this case it's chardev.in. I'm not sure, but I think it can be named anything. Finally, create an empty m4 directory in our "top_build_dir", the top of the module tree, or autoconf will complain. Hopefully this will be changed soon in upcoming revs.
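For reference, that Makefile fragment really is a one-liner naming the object:

```make
# module/Makefile: Kbuild picks this up when invoked with M=<this directory>.
obj-m := chardev.o
```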

With the main components of this simple driver in place we can issue the command that generates all our supporting project files. At the top of the driver directory we issue "autoreconf -i". It can take a minute depending on the size of the project, but soon you should see output regarding a number of files being created. The result should be the familiar "configure" script that kicks off the configuration process. After configure has completed, you know the drill. Make should create a "chardev.ko" module in the "module" directory. "sudo make install" and the module will be placed in the system modules directory, with the udev rules file placed wherever is appropriate for your system. I've included a skeleton file here; all you need to do is issue the "autoreconf -i" command, "./configure", and so on to build the project. I hope you're as excited as I was to be able to do this using the autotools suite; it's exciting to me to prove that the majority opinion is not exactly correct.
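The drill, spelled out, is the standard autotools sequence; nothing here is project-specific:

```shell
autoreconf -i      # generate configure, Makefile.in, and aux files
./configure
make               # should leave chardev.ko in the module directory
sudo make install  # module into the system tree, udev rules into place
```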

After I write a few shell scripts to help me manage my growing source archive, I'm going to investigate whether it's possible to create kernel modules with CMake. Ciao!

Where Am I?

You are currently viewing the archives for May, 2013 at Twittech Conditional Behavior Modification.