A Simpler MRU for GTKMM C++

08/15/2014

I guess I’m a glutton for punishment, or like the little bird who flies against the wind; I really enjoy developing desktop applications. It’s becoming something of a lost art, now that it’s all about apps for phones and web applications. But I still really enjoy using my desktop and writing applications for it.

The complaint I most often hear from people opposed to adopting Linux as their desktop OS is the lack of applications for it. Well, that won’t change until people start writing those apps.

As far as desktop application design goes, everything should be as simple as possible, or so my philosophy goes. All components of application design should follow that simple rule where it makes sense, and I see no reason for that principle not to apply to the humble RUF, or Recently Used File list, also known as the MRU, or Most Recently Used list, and so on…

The MRU (from now on) is a tradition, should be available in any mouse-driven GUI, and should be easy to implement in any toolkit or framework. Imagine my surprise when I started doing the research on using the MRU object as implemented in GTKMM; it’s pretty much the most complicated collection of classes I’ve ever seen. To display only the recent files relevant to your application you need to employ some sort of filter, and then there are all the complaints I read about those objects on the web. After that I stopped worrying about it and wrote my own class.

I have a lot of respect for Murray Cumming and the GTKMM team, and the whole GTK+ project; it’s a huge effort to maintain a unified set of APIs and keep them working on a moving platform like GNU/Linux/GNOME, I am surely aware. I’m also aware that there are usually a huge number of underlying reasons why a developer or organization implements a feature set the way they do. But sometimes you just want a thing to work the way you want it to.

When I got a little deeper into how GTKMM’s RecentChooser classes work (there’s the base widget class, a dialog object, an action object, as well as a menu object, then the filter you need to employ, and on and on) I simply shrugged my shoulders and told myself “I’m not doing that”. I get all the variations; obviously the action object is so you can add a history option to an edit menu, whatever. I just wanted the user to click on a dynamic menu that contained an MRU.

So with the history out of the way I bring you a simpler method using the STL and GTKMM’s own menu API:

My current application for the GNOME desktop is a classic Model-View-Controller implementation with a menu bar, and of course under the File item is my MRU.

An MRU at the simplest level is a FIFO, and std::deque is perfect for that job. My application’s data class (a collection of structs, really) holds a std::deque object.
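Something like this minimal sketch, assuming a hypothetical AppData struct (the real application keeps far more state than this):

#include <deque>
#include <glibmm/ustring.h>

// Hypothetical application data structure; only the MRU deque matters here.
struct AppData
{
    std::deque<Glib::ustring> mru; // recent file paths, newest at the front
};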

I started by adding some file menu entries, file1 through file4, and binding them to a dummy function. I knew I could change the menu labels and bind them to an actual function later (from my application’s Glib::ustring ui_info):
(WordPress is having conniptions with the XML-heavy Gtk UI string, so look in the sample code for this)
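You can infer the rough shape of that string from the widget paths used further down; something like this reconstruction for illustration, not the actual string from the sample code:

static Glib::ustring ui_info =
    "<ui>"
    "  <menubar name='MenuBar'>"
    "    <menu action='FileMenu'>"
    "      <menuitem action='file1'/>"
    "      <menuitem action='file2'/>"
    "      <menuitem action='file3'/>"
    "      <menuitem action='file4'/>"
    "    </menu>"
    "  </menubar>"
    "</ui>";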
(And from my app’s Gtk::ActionGroup object):
m_refActionGroup->add(Gtk::Action::create("file1",
    "file1", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file2",
    "file2", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file3",
    "file3", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));
m_refActionGroup->add(Gtk::Action::create("file4",
    "file4", "Reopen this file"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));

The “on_dummy” method is just an empty method; we need it because the ActionGroup template demands a slot be filled in. We’ll replace it with the real method later.
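For completeness, the stub is as trivial as it sounds (a sketch, following the ExampleWindow class used throughout):

// Placeholder slot; replaced by the real handler once the MRU is populated.
void ExampleWindow::on_dummy()
{
    // intentionally empty
}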

My file-open menu item, when activated, opens the file and also takes the path it got from the file picker and sends it to a method that inserts the file into the deque object, after checking the deque’s current size:

In a header file we have these declarations (more on the signal array later):
std::deque<Glib::ustring> mru; // our deque object
sigc::connection mru_sig[4];   // dynamic menu signals

Then in the implementation file, in our “mru manager” method; app is just a pointer to an “application structure”, a struct with the deque object, among other things:

// If the deque already holds four entries, pop the oldest file off the queue
if(app->mru.size() >= 4)
    app->mru.pop_back();
// then add the new file to the front
app->mru.push_front(str);
// keep exactly four slots; unused slots stay empty and their menu items hidden
app->mru.resize(4);

Pretty simple stuff. Now, every time a file is opened it’ll be placed at the front of our deque object, and older entries get rotated toward the bottom of the list each time a new file is pushed on. In this case I’m keeping the number of recent files at 4, but it would be simple enough to adjust that number, or make it user-configurable, by adding an integer class member and using it instead of the “4” constant above.
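A sketch of that user-configurable variant, reusing the hypothetical AppData struct from above (max_mru and add_to_mru are illustrative names, not from the sample code):

#include <cstddef>

std::size_t max_mru = 4; // user-configurable capacity; 4 matches the constant above

void add_to_mru(AppData* app, const Glib::ustring& str)
{
    if(app->mru.size() >= max_mru)
        app->mru.pop_back();   // drop the oldest entry
    app->mru.push_front(str);  // newest entry goes on top
    app->mru.resize(max_mru);  // pad so the menu loop always sees max_mru slots
}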

Then comes the re-assignment of the menu signals. Earlier in the method code I point to some empty Gtk::Widgets with a simple array of pointers:

Glib::RefPtr<Gtk::UIManager> _refUIManager; // Typical GTKMM stuff
Gtk::Widget* file[4];

file[0] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file1");
file[1] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file2");
file[2] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file3");
file[3] = _refUIManager->get_widget("/ui/MenuBar/FileMenu/file4");

The paths refer to the menu items in the Gtk XML UI; if you’re familiar with Gtk desktop programming with GTKMM you should be aware of how that works. We’ll need these as references for the dynamic menus we’ll be connecting the activation signals to. Speaking of which, here’s how those signals are connected to our menu items:
int n = 0;
for(std::deque<Glib::ustring>::iterator it = app->mru.begin();
    it != app->mru.end(); ++it) {
    // Use everything after the last '/' as the menu label
    const Glib::ustring& label =
        (*it).substr((*it).find_last_of("/") + 1, (*it).length());
    dynamic_cast<Gtk::MenuItem*>(file[n])->set_label(label);
    // Disconnect first so handlers don't stack up on the item
    app->mru_sig[n].disconnect();
    app->mru_sig[n] = dynamic_cast<Gtk::MenuItem*>(file[n])->signal_activate().
        connect(sigc::bind(sigc::mem_fun(*this, &ExampleWindow::on_mru), label));

    // Only show menu items that actually have a file behind them
    if(dynamic_cast<Gtk::MenuItem*>(file[n])->get_label().length() > 0)
        file[n++]->show();
}

We iterate through our list of 4 file paths, take the last path component as the name we display in the menu, and then do a generic signal disconnect on the item. If we don’t, signals will stack up on the item and we’d have several file paths flying at our “open file” method.

We then connect a new signal bound with the path data we want the menu item to open.

The signal method is simplicity itself:

void ExampleWindow::on_mru(Glib::ustring& label)
{
    std::deque<Glib::ustring>::iterator it = _app->mru.begin();
    for(; it != _app->mru.end(); ++it) {
        // Recover the displayed name from the stored path
        const Glib::ustring& text =
            (*it).substr((*it).find_last_of("/") + 1, (*it).length());
        if(text.find(label) != Glib::ustring::npos)
            std::cout << (*it).c_str() << std::endl; // the full path to reopen
    }
}

The bound text from the label is searched for in the deque object; if we have a match, we have our full path to the recently processed file.

There: a functioning MRU for a GNOME desktop application without the hassle of Gtk::RecentChooser. Couple that with a way of serializing the list* between sessions and you have a full-fledged MRU feature for your app.
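Serialization can be as simple as a plain text file with one path per line. A minimal sketch, assuming hypothetical helper names (none of this is in the sample code):

#include <deque>
#include <fstream>
#include <string>
#include <glibmm/ustring.h>

// Write one path per line, newest first.
void save_mru(const std::deque<Glib::ustring>& mru, const std::string& path)
{
    std::ofstream out(path.c_str());
    for(std::deque<Glib::ustring>::const_iterator it = mru.begin();
        it != mru.end(); ++it)
        out << it->raw() << '\n';
}

// Read the list back at startup, capped at four entries.
void load_mru(std::deque<Glib::ustring>& mru, const std::string& path)
{
    std::ifstream in(path.c_str());
    std::string line;
    while(std::getline(in, line) && mru.size() < 4)
        mru.push_back(line);
}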

There’s always room for improvement; traditionally MRUs have key accelerators (“1, 2, 3…”), which would be a nice touch and simple to add, as sketched below. And ideally this should be a fully encapsulated object, a drop-in, rather than the collection of methods on a main class presented here.
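For the accelerators, Gtk::ActionGroup::add() has an overload taking a Gtk::AccelKey, so something like this should do it (a sketch, untested against the sample code):

// Bind Ctrl+1 to the first recent-file entry; repeat for file2..file4.
m_refActionGroup->add(Gtk::Action::create("file1",
    "file1", "Reopen this file"),
    Gtk::AccelKey("<control>1"),
    sigc::mem_fun(*this, &ExampleWindow::on_dummy));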

The sample code shows a simple implementation using GTKMM 2.4, but I don’t believe there’s anything in the code preventing it from being converted to 3.0. I hope this will help developers create more applications for Linux; we need all we can get!

There’s a very simple sample you can grab here, build it with:

g++ -Wall -std=c++11 examplewindow.cc main.cc -o menu `pkg-config --cflags --libs gtkmm-2.4`

*Keep your fingers crossed, I may publish a much simpler alternative to Gconfmm.

The Truth

02/20/2014


Eight months after I was born, John Fitzgerald Kennedy, the 35th President of the United States, was assassinated in Dallas, Texas. Every fact after this has been disputed, argued about, written about, and filmed, ad infinitum, for decades. I was a buff, an “enthusiast”, an assassination nerd, for a few years. Then I sat down with all the facts in front of me, put them together, and made sense out of them, while ignoring the hype and the nonsense.

Right about that time Oliver Stone made “JFK”, a film based in large part on the book by former New Orleans District Attorney Jim Garrison, a book I had read a few years before. I firmly place both book and film in the “nonsense” category and dismiss both. Set aside also, for a second, the eyewitness accounts, the Warren Commission, and all other second-hand accounts, witnessing, and evidence, for the reasons I’ll detail shortly. I just want to be clear: any reasonable person who looks at the facts of the assassination would do well to do the same. There’s only one reasonable conclusion a person who examines those facts can come to.

Here is a list of what are, to my mind, the most important facts of the assassination, facts that people tend to gloss over or dismiss in myriad ways, but which are in fact the most damning for Oswald:

  • Marina Oswald confirmed taking the backyard photos of Oswald posing with the Carcano rifle used in the assassination
  • Marina also confirmed the same rifle was the one found in the Book Depository
  • Oswald’s prints were found on the rifle
  • Oswald tested positive for gunpowder residue with a simple paraffin (or “Dermal Nitrate”) test after being apprehended at the theater

Anything else just adds to the so-called mystery of the whole affair. Rather than being some dark conspiracy the incident is really just exactly as it appears; a horrible circus of incredible incidents that came together to form a perfect storm.

Oswald was a product of a broken home, and had a huge narcissistic monkey on his back. All his life he was driven to be more than he was, but he just wanted to will this fantasy into existence. He wasn’t willing to work for greatness. Over and over again, with the tools of greatness well out of his reach, he does things that are within his power to make that greatness happen. After defecting to the Soviet Union he is a celebrity, for a while. When that wears off he comes back to the US. Then he thinks Cuba is his path to revival, and so on. He’s a sad sack until he sees his path to infamy. There is no way in any real world that any spooks or black hats would invest time, energy, or money using Oswald as any kind of Agent of Change. Oswald sensed that in himself, and so became an Agent of Change all by himself, with his own agenda.

Stone’s JFK is another piece of work. It’s based on Garrison’s “On the Trail of the Assassins”, which in itself is a big fantasy. The book is based on Garrison’s step-by-step destruction of a respected New Orleans businessman, Clay Shaw, because Garrison saw a newspaper article that mentioned Shaw attending a party that may have catered to homosexual tastes. He later finds drug addict Perry Russo, who spins a tall tale about Shaw and David Ferrie, another New Orleans personality, at a party where they apparently make plans to assassinate Kennedy.

Garrison is a strange character in his own right. He’s been involved in the darker corners of New Orleans politics himself, having been indicted in an odd little pinball scheme, and over and over again he brought charges against people in and around New Orleans who angered him in one way or another, only to have the charges dropped when outside light was shone on his accusations.

Perry Russo himself wasn’t able to corroborate his own stories about Shaw, Ferrie, and Oswald that became the basis of Garrison’s book.

So put Garrison and Stone’s flaming bag of poop aside for good. Here’s another thing about the whole affair that never sat right with me: if it was a conspiracy, its very scope and length would make it the most successful conspiracy ever. Think about the number of people who would need to be involved, from Lyndon Johnson, the Vice President, to the ambulance drivers in Dallas, the Secret Service, and the Warren Commissioners. Earl Warren was involved in a cover-up of the assassination of President Kennedy?

Earl Warren was born in Los Angeles, CA in 1891, and grew up in Bakersfield, CA, where his father worked for the Southern Pacific Railroad. He attended UC Berkeley, became a lawyer, and joined the Army in 1917. After coming home to California he developed a reputation as a corruption-fighting DA. As he rose higher in state government he continued to hone that reputation, being known as honest to a fault and a liberal crusader, as shown by his leadership in Brown v. Board of Education, a landmark decision. He also worked hard to nationalize the Bill of Rights, and with Griswold v. Connecticut he showed that US citizens have a constitutional right to privacy. Legal and legislative accomplishments aside, Warren had a reputation for being a frankly honest man, period.

The point of all this is: do you really expect me to swallow that Warren was part of a conspiracy to assassinate Kennedy, something he most certainly would have had to have been a part of? I don’t think so.

Ruby off Rails is Retarded

10/30/2013


I can’t think of a bigger reason to ditch a tool than because it doesn’t do what you want it to do, or because the user can’t get it to do what he needs it to do. I have no use for nonsense like that. But worse, I have no use for a tool that is pretentious. And that’s how I feel about RoR.

I mean, what the hell is this crap?? EVERYONE is doing RoR, everyone is adding “gems” (Ruby libraries) to its related repositories, every Tom, Dick, and Harry is writing tech articles about the wonders of RoR. It’s the new wonder widget for the Web, as far as I can see from the volume of articles on the web regarding it. But I didn’t find it so, and I’ll tell you why.

RoR is pretentious, and presents itself like the second coming. You can see this from the sheer volume and tone of the tech articles written about it. It’s not difficult to find example code and how-tos using Ruby to accomplish many different things on the web. It’s a web language, there’s no doubt about it. And everyone using it seems to have a tone of “Well, to do that you just blah blah blah…”, and indeed, there does seem to be a “thing” for every “thing” you need done on the web. But the actual application of the solution…

Ruby, in my opinion, is horribly fragmented. You can’t say it any plainer than that. Just following a simple step-by-step for newbs yielded different results for me on different machines, and I could have sworn I installed Ruby the same way on both machines, both running the same Linux distribution even. Logically speaking, repeating the same actions should yield the same results on the same versions of the platform. But in my case it didn’t. Clearly there were slight differences in *something* regarding the platform (Linux Mint 15 “Olivia”), but whether the differences were in the toolchain or the C library or anything else I couldn’t say, and I shouldn’t have to. Yet on my successful platform I had a simple example web site working, and on the other I had stack traces after issuing “rails server”, or at least what looked like stack traces; I don’t know what they are called in RoR.

“Ok, never mind that.” I said, and proceeded to ask of Ruby a very essential, yet non-trivial, web problem.

Many websites need to be a portal, or a gateway, that protects their resources from ad-hoc use; that is to say, a site needs to recognize its registered users. Resource protection is not a trivial problem: how do you keep random web surfers out of things you don’t want them messing with when the HTTP protocol is stateless? You can access a page on the web by simply typing its URL into the browser, and the browser does its best to present the page (or resource) you’ve requested. Something else has to keep it from the browser’s request. That “something” is the heart of any web portal, and needs to be designed carefully. Its inherent complexity makes it a common attack vector for hackers and exploiters.

The point is, that “thing” is very important, and not something you write off-the-cuff. So I proceeded to look for something that had already been written. I didn’t think this would be a chore; with all the add-ons and cruft that’s already been written for Ruby, this should have been a snap, right? And boom, a Google search yielded a butt-load of URLs with ready-to-go portal gems: gems that would use MySQL or PostgreSQL authentication, gems that used OpenID (yay!), and other stuff (RADIUS, for example). So with a song in my heart I downloaded one that seemed reasonable. *Boom*. Stack trace. Something about the version of Ruby, or something in Ruby. Or Rails. Or a gem. I dunno. So I located another one, and deployed it. *Boom*, stack trace. I repeated this act several times until I finally landed on something that didn’t result in a stack trace. Since there were no instructions to speak of I navigated to the root of the system. I got the default red-trimmed Rails server page. Great. After an hour of screwing around with how-tos I got to a point where I wanted to encrypt the user password input. “Just add the bcrypt gem and ta da da da day daaaa.” *Boom*, some kind of version issue with the bcrypt gem.

At that point I gave up, at least for now. Obviously this stuff works, I just don’t have the snuff to make it happen, but this absurd fragmentation in Ruby is for the birds. And I know that’s the problem; I can see it from the stack traces. Every problem that comes up is due to something not liking the version of something else. It’s plain from the errors. For all the issues I have with PHP, at least I was able to get a basic web portal up and running in no time.

You Got Fleeced and You Don’t Even Care

10/23/2013

“Mine will be the most transparent administration in history”

The principles behind banks are something everyone can understand. They hold your money for you, and pay you for the privilege, current interest rates notwithstanding. The banks that service checking accounts for most people are retail banks, and they are strictly regulated and heavily insured, an important by-product of the disaster of the Great Depression. But how many really understand the credit default swap scandal of the last seven or so years?

Other, less regulated banks are investment banks; these banks are a bit freer to take bigger risks with the potential for more profit. When retail banks start taking those same risks, things like the Great Depression happen, which is why the Glass-Steagall Act was born. This legislation strictly separated the activities of investment banking from retail banking; when an investment bank sinks it hurts investors who had the money to blow anyway. When a retail bank goes down, it takes people like you and me with it.

Interestingly, starting from the ’60s on, legislators started chipping away at Glass-Steagall, culminating in the Gramm-Leach-Bliley Act of 1999, also known as the “Financial Services Modernization Act of 1999”, intended to address the cloudy “realities of modern finance”, and the most sweeping blow to the protections of Glass-Steagall to that time. Banks started dabbing their collective kerchiefs into the risks and rewards of investment banking with plain account holders’ money. All three congressmen who introduced the bill were Republicans, by the way.

Because of the rendering of the Glass-Steagall Act into a gutless cube of butter, something interesting started happening in the real estate market: because banks were able to dabble in investment banking they could create “investment packages” to sell off to other institutions, and some of those packages started including real estate loans. There is no downside to this for the banks at all, as the loans go completely out of their hands. Buyers of these “investment” vehicles don’t have a real issue with them; if the bundle contains a few bad loans, so what? Besides, these “bundles”, or mortgage-backed securities, could be re-sold, usually for a huge profit, and re-sold again, and so on. As the immediate downside for the banks was nil and the profits were many, there was little problem (as far as they saw it) with shoveling mortgages out the door by the truckload masked in these so-called “securities.” But to make mortgage-backed securities, you need real estate loans. Cue the subprime mortgage crisis of 2007-08. “Give the people loans”, said Representative Barney Frank of Massachusetts, so the banks complied by giving anyone a real estate loan, and covering their asses by selling off the loans in these securities.

It’s difficult to believe the officers, CEOs, CFOs, and other higher-ranking officials of these institutions didn’t know exactly what was going on. But the pure profits were too difficult to ignore, obviously. By the way, not one of these men and women has been indicted or made to feel any effects for causing the biggest financial disaster since the Great Depression. Not one that I’m aware of.

But the biggest insult came when Obama decided the thing to do was send truckloads of money to the very banks that initiated the crisis. Billions in bailout tax money, your money, was handed over to the banks that were most exposed to the credit default swap scandal, as it’s come to be known, with the understanding that these banks would start making loans again. But they didn’t; they sat on the money. It’s your scandal, you paid for it, you enjoy it.

Fun With DBus

07/14/2013


An example is worth a thousand words; that’s always been my motto, so when I needed to do some DBus digging I was stymied by the usual lack of examples. There is documentation, but like so much of it in the OSS world it’s pretty dry, obtuse, and difficult to follow. I looked at several examples around the web and most of them were non-working regurgitations of the examples in the SDK written for Qt, wrapped in the typical badly-written, broken-English nonsense that pervades the web. The example I linked to just now is purported to have been written just 8 months ago!

Also, what the hell is going on with Qt?! I just installed Mint Linux on a new MacBook Pro Retina as described in my last article, and one of the things I do is install my favorite IDEs, usually Qt Creator and Anjuta. But man, what is going on? Anjuta works as it always did, but there are these huge black spaces in the IDE as though some graphical component is missing, and moving from Qt 4 to 5 has been very painful. The cardinal rule in the development of user tools should be that certain files are sacrosanct and not to be messed with, or at least backwards-compatible. The Qt project file (*.pro) is one of these files. I spent hours last night and could not for the life of me get an older project up to v. 5 standards. I’m going to try to re-create the project from scratch and import the source files when I can, but I shouldn’t have to do that.

So I took the echo example out of the DBus-c++ SDK and sort-of modified it for my own needs (all that means is I renamed some files and created my own Makefile). Since I installed the SDK with Synaptic, and had no idea what I was supposed to do with the examples since they are part of an automake package and I only wanted the one, I decided to extract them to my home directory, analyze their needs, and, like Maslow or Freud, prescribe them long hours of therapy. The result is this archive that compiles on my Mint 15 (3.9 kernel) system and works, with a plain Makefile.

I’m really interested in exploring the DBus system and probing its capabilities. Now that I have an example that I know works, at least in my context, I can do just that.

I Did It Again

07/09/2013

The Retina 13″ is on top, the previous MacBook Pro I have is in the middle, I think

I’m so ashamed. I swore off Apple products forever, and here I am again with a new MacBook Pro Retina 13″.

I love my Acer S3, but it has problems. It’s got 4 gigs of RAM, total. The keyboard is prone to spurious typing anomalies (broken words, typing errors, lots of them). The resolution is really low, even for an ultrabook in 2013. The battery lasts 2 hours on a full charge, 2 1/2 if you really pack it. In 2013 those stats are ridiculous. Plus I’ve had a banner year so far, so I had some spare bux burning a hole in my pocket.

First I went to the nearby Fry’s Electronics and took a look. What I look for in an ultrabook is light weight and power. I look for the lightest book with the most GHz I can get. Then I look for RAM; expandability would be nice but that’s REALLY hard to find in an ultra. So, given that the RAM will be fixed in size, I try for the most I can get. That’s also hard. It was impossible to find an ultra with more than 4 gigs two years ago; ALL the manufacturers were worried about price plus meeting the minimum specs for running Windows 7, so 4 gigs was the most they were willing to fit the new, hot-selling ultrabook phenomenon with. Now that things are a little more relaxed it’s easier to find ultras with 6 and even 8 gigs. Another thing I crave is low weight. I know I ask a lot, but as a consultant I travel a lot and weight is a serious consideration. One thing I really don’t need is a book with a light drive (you know, a CD/DVD drive). I needed one last year to install Windows XP on an old but tiny PC I wanted to use as a media server, but before and after, rarely. If you feel like you need plastic light media for anything, you need to get acquainted with modern SD multimedia memory devices. (Ever breathe on a CD and all of a sudden not be able to read it? I have.) Light-driveless books used to be difficult to find, being largely relegated to the Japanese market. Lately, however, that hasn’t been as much of an issue and they are easy to come by here in the States.

At the Fry’s nearest to my house I wandered about the notebook aisles until I spied a really great number that met all my criteria. It in fact looked a bit smaller than typical ultrabooks, but with 8 gigs of RAM it would have worked quite well, and I wanted it.

Is there anything worse than a retail store that won’t sell you something? I don’t think so. I found a sales droid and showed her the ultra I wanted to purchase. She spent the usual 10 minutes fumbling about doing who knows what and finally came back and told me she couldn’t sell it to me. I asked her for the display model. She said she couldn’t sell me that one either. Seeing red, I left the store. I should have looked online for the model and probably would have gotten it cheaper, but I was really pissed off. I was on a mission now.

If you’re familiar with Fry’s you know it’s the one retail brick store that, like Mitt Romney’s “binders full of women”, has aisles full of notebooks; there’s really no other place like it. The help is utterly worthless but the sheer number of models on display can’t be beat. The only other place better WAS CompUSA, may that establishment rest in peace. So my only other shot, though I was loathe to take it, was another Fry’s. So I decided to haul my butt to the next nearest one, which happens to be the Fry’s in Palo Alto. THE Fry’s. A Fry’s in San Jose is certainly near the pulse of Silicon Valley, but the Fry’s in PA is in the Valley’s heartbeat. This is near Stanford University and Page Mill Road, the valley’s trail of venture capital repositories. THE Fry’s did indeed have a number of models available on display, but not the make/model of the one in San Jose that I wanted. What it did have was a full selection of MacBook Pros with the Retina display. I took a look at the Retinas. Damn, the display was pretty. They had both MacBook Airs and the “classic look” Pro models, the new ones. The smallest one caught my eye; it was just like my older MacBook Pro but considerably smaller, and with that incredible Retina display. I also knew that my keyboard issues with the Acer would be completely gone. The crisp MacBook Pro keyboard design is probably the best in the business. I also knew that I would have problems running the software that *I* wanted to run on it. The latest MacBooks use the new Intel boot process known as Unified Extensible Firmware Interface, or UEFI, and like anything unknown, the human reaction is to fear it. Which I did, but it’s the replacement for BIOS, and it’s not going away. It also complicates Linux installation. Thankfully it doesn’t prevent it, as I first feared; it simply complicates it.

In an effort to be both entertaining, relevant, AND useful, let me briefly summarize the process of installing Linux on a Retina. And let me preface the process by explaining that I have absolutely NO use for MacOS, sorry Mac fanboys. I have a larger MacBook that runs Windows 7 when I need that; I also stuffed 16 gigs of RAM in the thing, so I use it for running virtual machines (usually other versions of Ubuntu; the embedded and thin client world is going nuts for Ubuntu for some reason). What I wanted was a small, light, powerful book for traveling, with MORE RAM. Since most of my work is on Linux, that’s what I wanted to run.**

The first thing you’ll want to do is install rEFInd; use the “binary zip file”. Don’t get too caught up in the wordy web page that is the rEFInd home page; the author spends WAY too much time explaining the story of rEFInd in tangents. After resizing your disk, execute the install.sh script as root using the “--esp” and “--drivers” options. I’m not sure that the drivers option is absolutely necessary, but the esp one is. If you don’t specify it, rEFInd won’t get installed on the disk and when you reboot the machine Linux won’t boot. I went ’round and ’round on that one.

Then reboot with your Linux distribution ISO of choice written to a plugged-in USB dongle. There are some instructions on the net saying you need to write the ISO in a special way for MacOS; I didn’t find that to be true. You should see a new boot manager menu with an Apple logo and a generic USB symbol as button selections. This is the rEFInd boot manager. Select the USB option. Your choice of Linux should be fairly recent so as to take advantage of the EFI boot process; if you insist on using an older distribution you’re on your own, I have no idea what BIOS-based distributions work on the EFI system of the MacBook Pro Retina. After the dry-run system boots up (if your distro has a test-drive desktop; I think most do now), go ahead and double-click the install icon. Installation is the same as always, but be very aware of what you are doing during the disk editing part of the install; you’ll be presented with a gparted dialog (or whatever they do with KDE-based distros). Go ahead and partition the main slices however you want, BUT DO NOT DELETE THE EFI PARTITION. If you want to use Linux as the sole OS on the Retina, that’s fine as long as you do not touch the ~200 MB boot partition at sda1, or whatever device node your boot disk is (usually sda1 on Debian systems). This is the partition that should clearly be labeled “EFI” in the gparted partition list. I wanted to use this book solely for Linux, so when I got to this step I blithely deleted all partitions and created a main slice and a swap area, which normally would work fine. I installed Linux (Mint in my case) and when I rebooted: NOTHING. The machine wouldn’t load Mint.
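For reference, the rEFInd step boils down to something like this (a sketch; the archive name depends on the version you download):

# Unpack the rEFInd binary zip and run its installer as root.
unzip refind-bin-*.zip
cd refind-bin-*
# --esp targets the EFI System Partition; --drivers installs the filesystem drivers.
sudo ./install.sh --esp --drivers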

After doing some research I learned about the newer EFI boot process, that rEFInd was needed to install a new boot loader, and that you don’t want to re-construct an EFI boot partition from scratch. After messing around with re-creating EFI boot partition structures for 3 days (they have to be a certain size, have a certain directory structure, have certain files…) I finally re-installed MacOS Mountain Goat* or whatever and re-tried my Linux installation, this time without messing with the EFI partition. It worked like a charm; my new Retina was running Mint 15.

Here are some after-install pointers: I had to install and open up the curses-based alsamixer app and unmute all the sound devices; simply upping the volume controls, or messing with them in any way using the usual GNOME controls, didn’t give me my sound. I also edited /etc/modprobe.d/alsa-base.conf and added “options snd-hda-intel model=mbp101” as the last line in that file. The HDMI port on the right side doesn’t appear to work, unfortunately, and neither does a Mini DisplayPort to HDMI adapter. I was really looking forward to having HDMI out. I don’t know if a miniport to VGA or DVI adapter will work. Also, this book appears to have two display adapters, one from Intel and one from nVidia; don’t install any of the many nVidia driver options available in the repositories, they don’t appear to work, while the Intel driver works great. It’s kind of weird getting a full 2560×1600 resolution on a 13″ notebook LCD. That resolution is so high that I had to step on it a bit to make everything readable. I re-compiled a Mandelbrot-generating X app I wrote that also prints the execution time in the shell if it’s launched from there; running it on the Acer took about 9 seconds, on the Retina it takes 5. I get the sense also that this thing has four full Core i5 @ 2.5 GHz cores, not just two real and two virtual ones. I’ve also read reports of the Retina running very hot on Linux, but I’ve not noticed this.
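That alsa-base.conf edit is a one-liner if you’d rather not open an editor (equivalent to what’s described above):

# Append the model hint as the last line of alsa-base.conf.
echo 'options snd-hda-intel model=mbp101' | sudo tee -a /etc/modprobe.d/alsa-base.conf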

The 13″ Retina is a very powerful ultrabook, a true “Ultra”. I love it. It’s really the perfect size with the perfect power and RAM. It’ll run at least twice as long on a full battery charge as my trusty-but-slower Acer S3. I’m looking forward to doing a lot of work on it. I hope Linux developers down the road get the ports working, but that’s not going to hold me back.

UPDATE: I spent the latter half of yesterday building and installing the 3.9 kernel and some Intel support libraries and voilà! The HDMI port works!!! I’m staring into the warm glow of my Vizio 26″ HDTV as I type this. It’s funny, the Retina’s LCD is STILL higher-rez than the Vizio, but it’s nice to have a “console”-sized display. The MicroSD slot on the right works too! I LOVE THE RETINA!! Pricey, and locked down as far as RAM and SSD go, but I’ve come to live with that from ultras. If you’re looking to run Linux on the 13″ Retina, follow the above directions and then grab the 3.9 kernel and install it. Also grab the Intel graphics stack components here. After installing everything (yes, I went ahead and compiled everything from source, getting missing libraries from the baseline repositories when they popped up) I had control over my HDMI and SD ports.

* I have to say that Apple really saved my ass in this regard; the 13″ Retina (and I assume all the latest Pros) doesn’t come with much in the way of paperwork or media, almost none at all in fact. Just the usual worthless warranty “square”. There is no Mac OSX install disk, nothing. Just the MacBook and that funky little white power supply. Scary, but in some ways refreshing for a faux minimalist such as myself. Re-installing Mountain Lion was a simple matter of hitting an Option-R key combo during the boot process, using the disk utility to re-partition the drive the right way, and then selecting the Mac OS re-install option. Apparently, since I had already configured the book to use my wifi, it simply retrieved that configuration from *wherever* and went to town. After a warning that the re-install process would be slowed by my use of wifi (a hard Ethernet connection would obviously be faster, but who cares?) it automagically connected to an Apple server (I assume) and re-installed Mountain Lion. The whole thing was really kind of amazing from a geekly perspective, and very easy.

** The Apple droids will say that MacOS is a version of Linux. No, it’s not. It resembles it in more than superficial ways, but it’s not.
