37signals, entropy, and Sakai

March 29, 2008

Nathan Pearson, Sakai’s lead for the UX Initiative, just forwarded a post from the 37signals blog. It references a video interview with Ira Glass, where he talks about being a fierce editor and moderator, cutting more tape than you roll. The post extends the interview’s mention of entropy as the disorganizing enemy force in software.

In principle, I agree: entropy tends to bloat, delay, and complicate software. But the read-only experience of a listener/viewer in storytelling is a bit different from daily-use software. The post reveals some of the company’s “less is more” philosophy.

I give them credit for about half of the simplification and clarity we’re seeing on the web in general. All told, Rails is pretty nice, and they don’t get enough credit there. Putting the spoils of the past 5 years of practice evolution in front of the new generation of tinkerers is a very good thing. Specific to their apps, I love that they push products that do something very small, very well.

I believe that being a fierce gatekeeper, refactoring mercilessly, and outright trashing old/bad stuff is paramount to evolution of a system. But I also believe that 37signals has had a little too much Kool-Aid. When you push practice into philosophy and eventually into religion, your inertia can get downright indomitable.

A case in point from their flagship, Basecamp

On the To-Do dashboard, each member across projects is listed in a drop-down, so you can see the outstanding items assigned to them. Then there is an entry for “Anyone”. But it really means “No One” – items that can be picked up, but that no one has claimed. There is no way to see “Everyone” (all undone items), so I asked them for it directly.

The response could have come from Cupertino. It’s good for you like this, eat it and be happy. This was the one feature I really needed, and they wouldn’t consider it. It was enough to kill the usefulness and curb my enthusiasm, to be sure. I could adopt their version of minimalism or move on with mine intact.

So, in response to Matt’s post… Vigilance against miscellany is critical, but it can’t become a fortress. I think we do well to say “no” a couple of times while thinking “maybe”. If it keeps coming up, “maybe” turns to “probably”, and the scale tips. YAGNI is great until someone DOES need it.

That progression depends on being the “ruthless killer” without becoming a zealot or a lunatic. We might know better than the user most of the time, but we should never be so arrogant as to assume it is always so and declare it without consideration.

In the complex realm where Sakai lives, we should always be careful when giving people “a better way”. In something so dear as the ability to educate and learn, it’s about as easy to deliver frustration and obfuscation as obvious clarity, all in the name of the “simple and better” philosophy. -NB

“These are the data.”

March 29, 2008

Those of you who attended the Sakai conference in Atlanta might recognize that quote. It’s attributable to Eben Moglen (SFLC), uttered during the “lunch discussion” with Matthew Small (Blackboard). “These are the data,” is a quote I’ve used countless times in the past 16 months. I’ll explain in a moment.

So, fast-forward, and we see posts from Michael Feldstein on the initial invalidation of all 44 claims and Blackboard’s response. The latter post (and Bb’s statement) is specifically about the percentages of patents that are upheld, invalidated, or altered under reexamination. This is the exact context of Dr. Moglen’s original quote. He presented some hard figures and summed up with those ringing words.

I’m not going to beat up on Blackboard – they’re looking at the rules and playing the game. I completely disagree with software patents, but they’re still allowed in the rulebook, so I can’t blame them for filing before someone else did. Indeed, the applications were filed in 1999 and 2000. It was a different frontier with respect to the Internet, Free Software, and software patents then (see LZW, Unisys, SCO).

Personally, I thought it was pretty bad karma to announce the patent and file the lawsuit on the same day, turning the red lights in people’s minds into white-hot light on the detailed claims. But then again, I’m not on their strategy team. It’s just too bad that we’re tying up all the energy, time, and tax dollars, bickering over how we play in the same edusandbox and who gets how big a slice.

Anyway, there are 44 patent claims. Of the 10 issues set forth in the reexamination request, the rejections of 6 were adopted with modification, those of 1 without modification, and those of 3 were not adopted. Taken together, the adopted rejections deem all of the claims set forth in United States Patent 6,988,138 unpatentable. These are the data. -NB

You can download the audio of the lunch session from the Sakai Confluence page. There is also a transcript from Jim Farmer.

Dojo Storage — A timely treat

March 17, 2008

Every so often, something goes your way, ya know? So, last Thursday, I posted to sakai-dev asking whether I should use dojox.storage or borrow some stuff from rWiki:

http://www.nabble.com/Flash-storage----dojo.storage-or-homebrew--td16033853.html

It also just so happens that the main guy behind dojox.storage, Brad Neuberg, apparently felt like being a kind soul, did a ton of refactoring work, and bundled up the smallest, most practical package possible for my immediate need and posted it, not three hours later:

http://codinginparadise.org/weblog/2008/03/easy-download-of-dojo-storage-for.html

So far, it’s really easy to use (about 15 total lines, and tastes like HashMap) and moves between browsers very well. I’m only having one issue on IE. It’s choking on line 52 of some unnamed file that a little Googling hints is related to the Flash plugin. (For the curious, it’s an Object Required error in __flash__removeCallback()…)
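For flavor, here’s roughly what those 15 lines look like. This is a sketch based on the dojox.storage API of that era (put/get/remove plus a load callback on the manager); the key names and values are made up for illustration:

    dojo.require("dojox.storage");

    // The storage backends (Flash especially) load asynchronously,
    // so wait until the manager says storage is ready.
    dojox.storage.manager.addOnLoad(function(){
      // put() behaves like a HashMap with an async results callback
      dojox.storage.put("draftNotes", { title: "my notes" },
        function(status, key){
          if(status == dojox.storage.FAILED){
            alert("Could not save " + key);
          }
        });

      // get() is synchronous once storage has loaded
      var notes = dojox.storage.get("draftNotes");

      // ...and the rest of the map-like surface
      var keys = dojox.storage.getKeys();
      dojox.storage.remove("draftNotes");
    });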

I guess this can happen with different ways of including SWF files – the confusing bit is that one thread says use the HTML embedding, rather than JavaScript, and another says to use JavaScript rather than HTML. The tricky part is that both seemed to fix it.

So, I’m going to test on a machine that doesn’t have script debugging turned on and hope for the best. If it doesn’t grumble too badly, I’ll poke around for a fix in my spare time.

Either way – Cheers, Brad! You really made my day. -NB

Personal artifacts vs. official purposes

March 9, 2008

How long should personal artifacts submitted in some official capacity be viewable (by learner or official) in their original state? How can we let learners really own their materials?

We’ve approached this problem to date by ignoring it. We make an explicit step: at some point, we force the student to say “I’m done with this thing and it’s yours forever”, and lock it. Or we just let it be malleable forever.

This is directly analogous to how, in logic, math, and computer science, we sometimes restrict input to maintain a lower conceptual threshold. Persistence and security of artifacts is a really hard problem, so we make sure it can’t destabilize the easier ones we’re solving, by distilling a complex grey gradient into blacks and whites.

It’s a perfectly valid way to get a foundation, but it’s time to move into our version of second-order logic – versioned, purpose-stamped, multi-threaded artifacts – to address our complex realities. We can’t lie to ourselves and believe that hard-locking a student’s file forever is practical in a personal learning experience.

The specific thought that triggered this post was that the notion of estimated secure lifetime in cryptography is relevant to visible/storage lifetime in ePortfolios. However, this post also includes thoughts on the concepts of fluid artifacts being copied and made concrete at submission time, and how this relates to our existing web/email usage patterns.

Lots more below the break.

Useful Lifetime

In cryptography, there is a notion that the encryption should be strong enough that the estimated time to break it is longer than the sensitivity of the information. There is a general acceptance that, if it’s possible to decrypt the message legitimately, then someone could do it illegitimately.

We just try to make sure that it would take a suitably long time (a few years to steal a single credit card number from an intercepted e-commerce transaction, for example). Security of physical safes is similarly rated on estimated and actual time to crack by an expert.

It seems to me that we should consider the “useful lifetime” of submitted versions of user data in Sakai and ePortfolios. It’s kind of the inverse, where it’s not the notion of how long it’ll take the bad guys to get in, but the notion of how long the good guys can get in, and for what purposes, before the thing fades away.

I don’t think this will lead to a simple answer, but accepting that the valuable lifetime isn’t “forever” seems to be a start. Automatic, timestamped copies with a defined lifetime for a specific purpose are a tractable scenario. And it gives institutions a cutpoint to say when data can be archived and/or deleted from their live systems. Defining when the stuff goes from mutable to snapshotted – with a sensible UI – is the tricky part.
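To make that concrete, here’s a minimal sketch of what such a purpose-stamped copy might carry – every name here is hypothetical, not an existing Sakai/OSP structure:

    // A hypothetical snapshot record; field names are illustrative only.
    var snapshot = {
      artifactId: "essay-123",
      takenAt: new Date(2008, 2, 1),   // when the copy was made
      purpose: "program-review",       // why the copy exists
      usefulLifetimeDays: 5 * 365      // how long the "good guys" can get in
    };

    // The institutional cutpoint: after this date, the live system
    // is free to archive or delete the copy.
    function expiresAt(snap){
      var ms = snap.usefulLifetimeDays * 24 * 60 * 60 * 1000;
      return new Date(snap.takenAt.getTime() + ms);
    }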

Metaphors Already In Use

There is one application people employ every day that doesn’t seem to get much attention for its parallels to this problem: email.

Lacking sophisticated, intuitive software to collaborate on living information, people unconsciously send emails containing the two critically distinct components: links and attachments.

  1. Links refer to some live resource that could change between the time the email went out and when it’s consumed. Unless the recipient manually archives the resource, they have no reasonable expectation that it will remain unchanged. They might be annoyed if a link goes dead or an article is deleted, but the effect of visiting a link is understood in the context of time-sensitivity. (Just grab a magazine from 2000, try the URLs you find and see what happens…)
  2. Attachments are concrete manifestations of things that were alive until “Send” was pressed. If the recipient is on vacation or sabbatical for two months, the resource waits, patiently, unchanged in the inbox. Emails with attachments are, themselves, unaffected by the passage of time. (Look at a JCPenney catalog from 1980; all of the information is outdated, but intact.)

As soon as we move away from this familiar, learned metaphor of email, all sanity breaks down. In a learning system or ePortfolio, we get frustrated when resources are locked; we get frustrated when they change. We can’t seem to make any sense of the balance between personal rights and abilities, and the official or accountability needs.

For the system / builders, accountability wins, because accountability pays the bills. See my post on this within Sakai.

For the users, it continues to be a painful problem and they end up saying things like “I’ll just make a document and email it. At least that’s simple and I know what I’ll get.”

We never realize that the users are telling us what they need – we just need to implement it in the system in a way that isn’t onerous.

In reality, I think the users need to give an inch and the system needs to give a mile. The digital world is moving toward versioning, so there is some user adaptation needed. But we can’t force-feed a “better way”.

At least ten or twenty aspects of learning/portfolio systems depend on getting this right. And right means easy.

The only thing we should add to the user’s world is the ability to make a tag (in the source control parlance) in some specific spot or make a live link. I think switching a connection between the live version and a tagged version is actually the critical user component, but it must be trivially simple.

I suspect “locking” workflow steps could be implemented as automatic tagging. Those autotags would be identical to user tags, save for lasting a specific period, being visible to others based on appropriate rules, and not being convertible/detachable.
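Here’s a sketch of those two user-facing pieces – the live/tagged switch and the autotag-as-lock – again with entirely hypothetical names:

    // Switch a link between the living artifact and a pinned tag.
    // This is the one operation that has to be trivially simple.
    function setLinkTarget(link, mode, tagId){
      link.mode = mode;                          // "live" or "tagged"
      link.tagId = (mode === "tagged") ? tagId : null;
    }

    // "Locking" a workflow step becomes an automatic tag that expires,
    // is visible by rule, and can never be re-pointed at the live version.
    function autoTag(artifact, purpose, lifetimeDays, visibleTo){
      return {
        artifactId: artifact.id,
        version: artifact.currentVersion,        // pin the submitted version
        purpose: purpose,
        expiresAt: new Date(new Date().getTime() + lifetimeDays * 86400000),
        visibleTo: visibleTo,                    // e.g. a role or rule name
        detachable: false                        // unlike a user-made tag
      };
    }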

I’m naively hopeful that the mainstreaming of versioning (in Google Docs, “Track Changes”, wikis, source control, document management systems, JCR, etc.) will give us all some internal understanding of how digital information exists infinitely – as opposed to the discrete physical counterparts – and how to manage it. Don’t laugh just yet – you don’t think about your understanding of emailing links vs. attachments; you just do it.

One very interesting thing I’ve noticed in Google Docs: there are automatic drafts that happen while authoring, and you can switch between them at leisure until you save. Then the interim, uninteresting junk fades from view – a sophisticated technique to maintain sanity in the user’s world.

I’m not sure why the topics of cryptography and email haven’t come up in this conversation. I just hope that considering what people use and do implicitly every day gives us some traction on a practical set of solution strategies. -NB

Sakai and OSP — development, accountability, personalization

March 9, 2008

Sakai/OSP development is expensive, so it’s hard to find funding prioritized for the “soft” benefits of personal ownership and development that might not jibe with the “hard” outcomes understood as necessary for accountability.

I believe these “soft” outcomes can communicate accountability just as well, and even better, but it’s a much more intensive project that needs smart, creative, dedicated people over time. The consumable accountability data needs to be assembled as a secondary product, as opposed to the primary, “automatic” data products of rigid accountability measures. (…which feel a bit like TPS reports to students, faculty, and implementors, in my experience.)

This is one area where the “lower bar” open source systems have an apparent upper hand in development. Good portions of the development are done by motivated folks who have some time to give to personalization. It’s easy enough to “get in” that essentially unfunded effort bolsters those aspects.

I believe that, in today’s higher education climate, any system that doesn’t address accountability in a systematic fashion will fall out of favor very quickly at any institution where the words “ePortfolio” and “assessment” have been uttered in the same sentence. It’s really hard to build a realistic accountability system in your spare time, out of the context of a real accountability project.

This is where Sakai/OSP is uniquely positioned. We’re admittedly a bit behind on personalization, but that is changing quickly. At the same time, we have a depth of reach into the “regular” learning system activities and assessment capacities that’s unparalleled. It’s also a primary goal for a number of us to make raw development much easier. I really believe the 2.6 release has the potential to change the game. -NB

Eclipse Ganymede (3.4) - Still no Visual Editor?

March 9, 2008

About a year ago, I was anxious that an upgrade from the Visual Editor Project might be included in the Eclipse Europa (3.3) release…

I’m a big fan of the portability, speed, and general native feel of SWT, and I’m generally a fan of the Eclipse setup. I’ve also done a good bit of C# stuff for hacking out GUIs for one-off apps. It’s actually easier than dealing with console scripts and you can hand off an .exe to let someone accomplish something without a proper shell. Given that working with Sakai is pretty Java-intensive, and that I was bouncing between OSX/XP, I figured these lightweight admin-style GUIs could be hacked together in SWT… Boy, was I wrong.

I admire the complexity of a GUI builder that generates all kinds of layout code, etc. It’s definitely not something I want to take on as a project right now. But, I’m pretty frustrated with the VEP. It missed the Europa bundle – no big deal, I thought – “there’ll be a package soon enough; this is important stuff”. Along comes October (three full months later), and this is posted on the main page:

Visual Editor + Europa == ouch

To my knowledge, this is the only official status update from the VEP since. While scraping the blogs and forums, I found one guy who seems to be maintaining unofficial builds, but I don’t see any active development at all. If you’re dangerous, you can check out http://www.ehecht.com/eclipse_ve/ve.html – he even has a Ganymede M3-compatible bundle – I’m too scared/disinterested/busy to spend the time.

And now that NetBeans has released 6.0 with cleaned up usability and the actually-funded Matisse, I’m really starting to wonder if I should give it another look. Another really cool feature I saw demoed was live model diagrams with full round-tripping right in the IDE. And they seem to have a BPEL designer integrated, which tickles my SOA side.

I’m not giving up on Eclipse, but I treat free software kind of like free agency: I use and probably contribute to what works for me. The adoption/switching cost is based on learning/porting time and, for an IDE, I figure that to be about two weekends of hacking to get productive. If I can crank up and save a few hours at a crack by being able to whip up a GUI and evolve a new data model with autodiagrams, it’s pretty tempting. -NB
