
Singpolyma

Archive for 2008


Describing Actionstream Sources

Posted on

I maintain wp-diso-actionstream, a plugin heavily inspired by MT Action Streams.

Early on, one of the things that made these two plugins so cool is that they share a config file format.  It’s YAML, which I’m not a fan of, but there was one big advantage to using it when I started: I was guaranteed someone would be interoperable with me, because I was using their format.

I’ve changed that YAML file quite a bit, but I’ve not added many “extensions” because I want MT Action Streams, at any moment, to be able to take the extra sources I’ve described and add them in.

There are essentially two parts to describing any source.

profile_services:

Yes, that’s the highest-level YAML heading for the first part.  Underneath is a list of simple hashes, described like so:
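For instance (the twitter entry here is illustrative, not copied from the real config, so treat the values as assumptions):

```yaml
profile_services:
  - source_identifier: twitter
    name: Twitter
    url: http://twitter.com/%s
    ident_label: Username
    ident_example: singpolyma
```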
This is relatively straightforward.  The source_identifier is a unique string (it must match what is used in the next part) identifying the source.  The name is a human-readable label for the source.  The url is a template for the profile URL, with %s where the user identifier goes.  At this point you really have all you need: with a service identifier and user identifier you can construct a URL, and vice versa.

There are three optional parameters that help make the UI nicer looking.  ident_label provides the human-readable string users are used to seeing associated with this particular identifier (ie, Username, Screenname, etc).  ident_example is an example identifier (I usually use my own username on the service in question).  ident_suffix is any text a user might be used to seeing come after the username (such as .myopenid.com).

This first part is relatively uncontroversial: there are rarely multiple ways to map a user and service to a profile URL.  Ideally, I would love to see services hosting this YAML fragment somewhere and making it discoverable in some standard way (whether <link> tag or YADIS).

action_streams:

This section varies a bit more, since not everyone will agree on the right place to get data from, what data needs to be parsed, or how it should be output in an actionstream. Here is an example description:
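Reconstructed as a sketch (the twitter entry and its field values are illustrative, not copied from the real config):

```yaml
action_streams:
  twitter:
    tweets:
      name: Tweets
      description: Status updates posted to Twitter
      html_form: '[_1] posted a <a href="[_2]">tweet</a>'
      html_params:
        - url
      url: 'http://twitter.com/{{ident}}'
      atom:   # parse the endpoint as an Atom feed
```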

The high-level identifier (here, twitter:) must match the source_identifier from the previous section.  Because one profile can have multiple sources of actions associated with it (for example, tweets and favourites), there is another level of nesting where you give the identifier for the content type.  This is currently pretty arbitrary, but I’d like to see it move towards being the “standard” activity verb.

The name is the human-readable label for the kind of content in this stream.  The description is optional, additional human-readable information about the data.

html_form is any XHTML, with [_1] replaced by the action owner and [_2], [_3], etc. replaced by successive fields from html_params.

url is the URL to get the stream content from.  If not present it is assumed to be the profile URL. RSS/ATOM feeds can be detected on this endpoint if the content to be parsed is such a feed.  {{ident}} here is the same as %s in profile_services. I would like to deprecate this and switch to %s everywhere in a future version.

The final parameter describes how the data is to be parsed.  You may use atom:, rss2:, or xpath:.  RSS/ATOM feeds automatically have fields for their most popular elements (title, created_at). Any extra fields you wish are given a name and an XPath expression to use in parsing.  A more complete discussion of these field names, ones currently being used, and how I would like to see this progress can be found in this pastebin.
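As a sketch of what a consumer does with those field definitions (the field names and the tiny feed below are illustrative, and this is plain Python rather than the plugin’s actual code):

```python
# Illustrative: apply a set of field-name => XPath definitions, like those
# declared under atom:/rss2:/xpath: in the YAML, to each entry of a feed.
import xml.etree.ElementTree as ET

NS = {'atom': 'http://www.w3.org/2005/Atom'}

# Hypothetical field definitions, as a source description might declare them.
FIELDS = {
    'title':      'atom:title',
    'created_at': 'atom:published',
    'url':        'atom:link',
}

def parse_entries(feed_xml):
    """Return one dict per <entry>, with each declared field evaluated."""
    root = ET.fromstring(feed_xml)
    items = []
    for entry in root.findall('atom:entry', NS):
        item = {}
        for name, path in FIELDS.items():
            node = entry.find(path, NS)
            if node is None:
                item[name] = None
            elif name == 'url':
                item[name] = node.get('href')  # links live in an attribute
            else:
                item[name] = node.text
        items.append(item)
    return items

feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>Hello world</title>
    <published>2008-10-01T12:00:00Z</published>
    <link href="http://example.com/1"/>
  </entry>
</feed>"""

print(parse_entries(feed))
```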

Moving Ahead

If the above two YAML snippets were made available by most sites with actionstream content, then the plugin could easily provide a nice set of defaults that kept themselves in sync with site evolution.  Users could override anything locally, of course.

There is one thing that neither of these describe, however: private items.  While a feed could be protected by OAuth, and the plugin could then authorize to pull in the data, this seems like going about it the wrong way around.  I’d really like to see some way of telling a site: “push my activity over there” and having it discover and hit a callback with the activity data.

Microblogging: The Open Wall

Posted on

I first experienced the beginnings of the “social web” in high school.  My friends all had Xanga sites, which were basically blogs about nothing.  One practise of theirs, which annoyed me and seemed not to be present on the rest of the blogs I found, was that they abused comments horribly.  Comments were never about the content of the post.  Rather, to contact someone, you would comment on their most recent post.  To reply to someone’s comment, you would comment on the most recent post on their site.

This is exactly how Myspace profile comments and the Facebook “wall” are intended to work.  Facebook even built the “wall-to-wall” feature to show conversations back and forth across this odd system.

Now think of microblogging. Think of how you use it. Yes, there’s a publication aspect to it for sure (I say what I want people to hear).  There is also, however, this element of public conversation people seem so interested in.  Back-and-forth between two or more people, on their own pages, archived publicly.

What’s even better about this realization?  I hated the Xanga comments, and I hate the Facebook wall (and their new “comment on status” feature), but I love @replies.  So it wasn’t the concept of public conversations I wasn’t getting, but merely an implementation detail.  @replies are piped through a good notification system (which for Twitter these days involves scraping a feed and re-posting it to a fake identi.ca account so that I can get them via IM), so they can be near-real-time when I have time, and are still there for me if I don’t.

Permissions 0.01

Posted on

Over the course of working with Will, it was decided that the ability to show certain data only to certain viewers, based on permission settings, was more general than just profiles.  All permission logic and UI had previously lived in the diso-profile plugin.  Other plugins, however, such as actionstream, supported using the functionality if it was there.  It was reasoned that someone might want to use permissions on their actionstream (or anywhere else! just wrap the output up in an if statement and call diso_user_is(‘relationship’) ) without having the profile plugin installed.  Will nicely extracted the functionality into a separate plugin which I have now packaged for download at the DiSo code site.

Download the plugin

Code Archive

Posted on

I have put a bunch of my deprecated plugins and other odds and ends of code that I’m no longer working on up at /archive.  Some of it is marked with a license.  For any that’s not: just ask!

I also have similar odds and ends, some of which I still work on, up at github.

XRDS-Simple Plugin Update

Posted on

Just a note that my XRDS-Simple plugin has undergone some major refactoring as part of the DiSo project. It now lives at WordPress Extend.

DiSo Profile 0.50 Release

Posted on

Hot on the heels of the Actionstream 0.50 release comes the 0.50 release of DiSo Profile! It can be downloaded from the usual place.

Note: The permissions logic has been spun out into the new permissions plugin.  Make sure you install and activate that plugin first if you have any private data, or it will be displayed to the public.

Changes

  • Changed author to DiSo Development Team to show contributions by Steve and Will
  • Integrated with new WordPress admin theme (about time we released that!)
  • Permissions logic spun out into permissions plugin
  • Sidebar widget
  • Fix for hCard import button (now link)
  • Nicer URL display
  • Nicer profile preview

Actionstream 0.50 Released

Posted on

I have just pushed wp-diso-actionstream version 0.50 out. You can download it from the usual place.

Changes

  • Changed byline to “DiSo Development Team” to reflect all the contributions by Will Norris (more specific contributors in LICENSE file)
  • actionstream_services filter as the way developers add custom service definitions
  • Better YouTube support
  • Support for many new services: userscripts.org, brightkite, getsatisfaction, backtype, github, and twitter favourites
  • Sidebar widgets (actionstream and services list)
  • Template tag for services list
  • Nicer RSS feed URLs
  • Some major refactoring
  • Integration with new permissions plugin
  • Intelligence in service display when wp-diso-profile is installed

On Universities

Posted on

Saving this phrasing before I forget it:

Some people are in university for accreditation, some for education.  The university tries to provide both, and as a result is suboptimal at both.  They are not complementary goals (and may even be contradictory in some cases).  And, as the Unix Philosophy, Google, and life teach us: doing too much means doing nothing well.

Severed Fifth Denied by Reign

Posted on

Jono Bacon‘s Severed Fifth project released its debut album today.  I torrented it first chance I got and listened to the Vorbis files on my stereo from my media centre.  I must say that Jono has delivered, as promised, a pounding metal album that instantly takes its place at the top of the heap in free-as-in-freedom death metal.  The album is well on its way to being one of my favourites.

Not content just to thrash and scream for the length of an LP, Jono mixes it up with softer vocals in some songs, singing over growls, a hard guitar solo in The Lake, and even a more bluesy solo in another track.  A few tracks are also interjected with monologue snippets.

One song pauses near the beginning in that just-out-of-sync way that surprised my brother and me a lot when we first heard it, but is really enjoyable.  The guitars and vocals deliver that moshable experience that makes metal concerts so much fun.

The album is also released under a truly free license: CC-BY-SA.  While I’m no fan of the ShareAlike clause, Jono’s willingness to step outside of the NC-SA/NC-ND non-free regime that too-often dominates “free” music is really refreshing.  I’m looking forward to the mixes, mashes, and hopefully videos that fans put out.

No review can be all-positive.  As great as this album is musically, there are a few small things that I would have appreciated.  For one, it may just be me, but the album seems to have less bass than I like.  My subwoofer barely moved.  This may be the Vorbis compression or the mix, and it may even have been on purpose.  I’m still willing to chalk it up to my soundcard, but I do play other music and I get bass tones a lot better on much of my stuff.  I would also have appreciated lyrics with the release: I know Jono’s a busy guy, and I appreciate that not everything can be done before release when you have a firm date.  I’m just saying that I really like to do a second listen while reading along, and that just isn’t possible yet.

I was also personally not a huge fan of a few of the very pronounced “moth-er-fuck-er” uses in two of the songs, but that’s much more a matter of personal preference.  The lyrics are not so laced with profanity as to be outright offensive like some other artists may be.

Overall, a very good album.  We’ll see where the project goes next.

Why I Support Free Culture

Posted on

The free culture movement is a social movement that promotes the freedom to distribute and modify creative works, using the Internet as well as other media. (Wikipedia)

There are a number of things that get associated with the term “Free Culture” and a number of reasons people support them.  Let me start with what I do not support:

  • I do not support the rampant piracy of music, or the triumph over the RIAA through possible loopholes.
  • While current copyright laws and enforcement practices are counterproductive and unfair, I see this as a separate issue from Free Culture.
  • I do not support Free Culture just because I believe in Freedom (although I do).
  • I do not support “mix culture” that thrives on living just as close as they can to the Fair Dealings (/ Fair Use) lines just because they want to use the content without paying.

If these things, to me, are not Free Culture, then what is?

First, it’s been beaten to death but I must say it: libre is not gratis.  When I talk about Free Culture, I’m not talking about not paying for things.  A lot of Free Culture is available gratis, but also some is not: and I have been willing to pay / donate to even those that are available at no cost.

I support free culture because a harmonic culture is a strong culture. Let me expand on that.  Harmonics are those things which reinforce each other.  Musical melodies can be harmonic, and that is the most common context for the term.  A culture in which  The Backstreet Boys sing I Want it That Way is alright. Artists can create original works and distribute them. But a culture in which “Weird Al” Yankovic can then sing eBay reinforces itself.  Culture builds on culture.

Nothing new here, and many would point to the infringing mix culturists and say that’s what they’re trying to do.  But by mixing locked culture, often illegally, they hurt the cause and their art form.  I support Free Culture not because I want to see more mixes, but because I want to see more things that can be mixed.  To me, that is free, no-strings-attached permission to build on your work.  If you make a song, I make a video.  You make a cartoon, I include it in a documentary.  It’s not the building on that is important, though, but having things to build on at all.

Some Free Culturists want to achieve this goal by making copyright laws more lax.  This is a fine goal, but it is ultimately the wrong solution.  While having more Fair Dealings allowances and content entering the Public Domain faster gives us greater access to our culture, even more can be done by licensing works freely now.

The great benefit of this model is that it helps artists who are creating work right now, not only to have a rich community to draw from, but also to market themselves at all.  In a traditional copyright model, everything hinges on expensive licenses, equipment, and lawyers protecting it all.  If you open yourself up to unrelenting remixing, and to business models that cut out the middlemen (and this applies well outside of music), you can interact with the fans/consumers more directly and make as much or more money doing it.  All without selling your rights or giving someone else a chance to meddle in what you do best: being the artist.

The One True Format: Technological Snobbery

Posted on

There’s an odd phenomenon that occurs as one transitions from an outsider writing code to someone who actively contributes to a community.  The more you contribute to mailing lists and blog discussions, the more you realise it.  You have an opinion.

You never meant to have an opinion, you just meant to write code.  Let brighter minds decide how it all works and just build the solution.  Code, not specs, not politics.  Re-use what’s out there in new and interesting ways.  Yet this, in and of itself, is an opinion.  The more you contribute, the more you realise that you are no longer just asking that things be made easier for implementors or answering questions about past decisions: you are advocating solutions.

This has happened to me more than once as I have transitioned from community to community.  The first was when I began a project to write my own feed reader (BoxtheWeb) and simultaneously became involved in the Blogger Hacks community.  I slowly went from a hacker who thought feeds were cool and wanted to build stuff with them, to an advocate of the RSS2.0 format.  Somewhere in my coding I decided that format was the easiest to use and the best suited for what I wanted, and I began to advocate.

Next was JSONP.  One of the few things I have advocated that gained much headway fast (through nothing I did, I’m sure, but still exciting to see).  Yahoo, coComment, del.icio.us, and others all jumped on the JSONP bandwagon and I was happy.

Other formats got either on my “good side” (OpenID, OAuth, POSH, Microformats) or my “bad side” (ATOM, EAUT, PortableContacts) for one reason or another.

Ridiculous.  Sure, solutions should be chosen based on technical merit, but who gave me (or anyone) the right to decide which technologies have merit?  It’s time to get back to basics.

If it works.  That’s the key.  Working code.  RSS, ATOM, ActiveChannel, hAtom, or a list of URLs in a text file… really, I don’t care.  As long as the data is there and I can read it, I can write code.  Who cares if EAUT takes off or if http://me@you.tld/ remains valid?  If people can log in: we win!  Not only is there not One True Format, there is no long-term difference between formats.  Sure, there may be reasons to choose one over another, but ultimately it’s just data.  What users (or, even better, developers) can do with that data is what’s important.  These days, we’re better at working around the deficiencies of services (*cough*twitter) than building ones that do what we want anyway.

Groups on the Open Web

Posted on

Groups seems to be a very popular concept on the social web.  Facebook, Myspace, Orkut, last.fm, Ma.gnolia, FriendFeed (rooms) : everyone has groups.  How do we think about these groups in the context of tearing down walled gardens?  Do we think of places like Ning that replicate all this functionality in a more open ecosystem?  Or do we push further into a more decentralized way of thinking and collaborating?  Try the following links out:

What do you think?  Besides being a bit rough (some unrelated data sneaks in), this seems like a very good snapshot of what is going on in and around DiSo : better, perhaps, than any of the “official” sources.

I would maintain that on the Open Web we can see two different kinds of groups: ad-hoc and gardens.  Both could be maintained by the same software (which I would love to build, but will not be upset if the lazyweb beats me to it!)  Ad-hoc groups are the simplest: let a user choose one or more defining keywords and then display content from all over the social web that fits that tag (with options to filter by blog, microupdate, bookmark, event, etc).  Done.  A group is born that you can track and reply to and interact with (with appropriate links back to the original service, of course, no extra comments layer like we see in FriendFeed if we can help it).
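The ad-hoc filtering above can be sketched like so (the data shape and names are my own invention for illustration, not an existing implementation):

```python
# Toy ad-hoc group: pick defining keywords, then filter a merged stream of
# items from many services by tag, optionally restricting the kind of item.
def adhoc_group(items, keywords, kinds=None):
    """Select items matching any defining keyword, optionally by kind."""
    keywords = {k.lower() for k in keywords}
    selected = []
    for item in items:
        tags = {t.lower() for t in item.get('tags', [])}
        if not (tags & keywords):
            continue
        if kinds and item.get('kind') not in kinds:
            continue  # e.g. show only bookmarks, or only microupdates
        selected.append(item)
    return selected

stream = [
    {'kind': 'bookmark',    'tags': ['diso', 'openid'], 'url': 'http://example.com/a'},
    {'kind': 'microupdate', 'tags': ['lunch'],          'url': 'http://example.com/b'},
]

print(adhoc_group(stream, ['DiSo']))
```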

Gardened groups would be a step more formal, and would be the open variant of existing walled-garden groups.  Group administrators (“gardeners”) could choose a group name/shortname and keywords.  They could then choose to have the group not follow certain services (for example, if no photos would be relevant, not track Flickr) and could also add other relevant feeds/response links (ie, a mailing list RSS feed with mailto: links for the “reply” function, code repository commit feeds, etc) and links to relevant pages that are static content (wikis).  Content coming in from all sources could be pruned to hide content that matches the keyword(s) but is not relevant.

Feeds and OPML files should be provided to go along with groups, interaction links should make it into the footers of feed item bodies.

PGP UI Suggestions

Posted on

Let’s face it: currently, PGP is hard.  Most geeks even consider it “geeks only”.  While few average users can benefit from encryption (few people say things that secret), everyone can benefit from signed authenticity (at the very least to cut down on spoofing).  The biggest obstacles to end users are (a) they don’t see the point, (b) they freak out when they see “weird” inline content or attachments, and (c) verifying long hexadecimal signatures is hard.  I will make suggestions about these in order.

The fact that users don’t see the point really is the biggest problem.  If more users cared about authenticity, more would be willing to endure the pain of doing things “right”.  My hope is that if sufficiently seamless solutions become common enough, some people will use them because they are “right there”, and as more people they know send signed messages, perhaps some network effect can be leveraged.

Weird content is on its way to being fixed.  If everyone installs FireGPG and uses a mail client (or webmail supported by FireGPG) that supports PGP (a growing number do), then at the very least the noise gets hidden behind a “this message is signed” notice.

Few people want to read long hex numbers to each other in person.  Here’s where it gets touchy, because anything we change here changes the security of the transaction.  I’m ok with that.  I’d rather my non-geek friends have a somewhat-trusted key than an untrusted key or no key at all.  My geek friends and I will still verify each others’ fingerprints.

Alice receives an email from Bob, with whom she has never previously shared cryptographic information.  Neither Alice nor Bob is a geek, tech savvy, or familiar with cryptography.  Alice knows her email program has a new feature that lets people verify each others’ messages and decides to try it out.

Alice elects to share her PGP key with Bob.

Alice has never shared her key with anyone before (she doesn’t have one).  She is told this and asked to wait while “some setup occurs”.  The key is generated and the UI moves to the next step.  Somewhere in here there should be a notice to back up the key, “since if you lose it you can no longer send verified messages”.  Public keys should be sent to a public key server automatically.

Alice secures her key to Bob.

Alice now picks a secure question and answer to prove her identity to Bob within a reasonable (but not cryptographically rigorous) measure of certainty.  An email is sent to Bob’s address with the output of `gpg -a --openpgp --export KEYID | gpg -ac --openpgp -` attached.  Also attached is an unencrypted export of the public key, for use (moot in this case, on a new key) if this key has been signed by others Bob knows.  That is, Alice’s public key is symmetrically encrypted with an algorithm allowed by the OpenPGP standard (currently 3DES), with the passphrase being the answer to the secure question.  I’ve marked it case sensitive, but all UIs COULD downcase passphrases to simplify this.  The secure question becomes the body of the email, and the subject can be something like “Alice is sharing her verification key with you!”

Bob receives the email, and his client flags it (with an icon or similar) as containing verification information.  Some clients may find it makes more sense to process the message immediately upon receipt, instead of just flagging it.

Bob opens (or his client auto-opens) the message.  Instead of being presented with an email full of gook, he is presented with a window by his client.

Bob decrypts the key.

Bob enters the answer and is presented with a window describing the key.  This window should say “Alice is claiming…” or similar and display the image in the key (if there is one) and all UIDs/comments.  There should then be a list of how well Bob knows this key:

It claims to be alice@example.com and was sent from there: very low

It was found to be the same as one available on public key servers: very low

It was verified using a secret question: medium

It has not been verified by anyone you know [aka, key signatures, high]

[Button: Advanced Verification, showing the key fingerprint – for advanced users]

[Button: trust this key]

[Button: I have talked to Alice and know this is her key (ultimate trust, signs key)]

If three or more signatures from people Bob trusts are on the key (remember, the unencrypted one) the client may skip to this step and provide a “verify using secret question” button.

Opening this message in the future should sync with keyservers, and then show the last dialog again, showing any new signatures from people Bob trusts, and allowing him to verify/sign it.

Give Raw Pages a Lift

Posted on

We’ve all seen it: the harsh white background, serif fonts, and window-border hugging indicative of unstyled (or very lightly styled) (X)HTML.  Today I discovered something: it takes very little CSS to take a basic HTML page and give it some flavour (and make it less painful on the eyes).  Just eight lines of CSS.

Not all pages have such CSS, however, and sometimes it might be nice to just hit a button and get some style.  So I created a bookmarklet.  Drag it to your bookmark bar, and next time you see an unstyled page, hit it.  Basic styles will be added instantly.
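Something in this spirit, though not necessarily the bookmarklet’s exact rules, is all it takes:

```css
/* illustrative: constrain line length, soften colours, drop the serifs */
body {
  max-width: 40em;
  margin: 2em auto;
  padding: 0 1em;
  font-family: sans-serif;
  line-height: 1.5;
  color: #333;
  background: #fdfdfa;
}
```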

SocialSearchMe.com API

Posted on

First off, my social network search engine now has a domain thanks to Tantek!  It’s just a redirect forwarder for now, but much easier to remember!

I have been polishing some bits of the search engine and am pleased to report that it now has a complete API!

First off, the microformat API.  All data is marked up with hCard, thus allowing pages on the engine to double as API output.  This is the preferred method.  If you really must, a JSON(P) variant is available.

These are the endpoints for a standard search:

http://scrape.singpolyma.net/profile/?q=NAME

http://scrape.singpolyma.net/profile/search.js.php?q=NAME

These are the endpoints to search from the “point of view” of a particular person (specify a URL):

http://scrape.singpolyma.net/profile/?q=NAME&pov=URL

http://scrape.singpolyma.net/profile/search.js.php?q=NAME&pov=URL

To retrieve data about a specific user use:

http://scrape.singpolyma.net/profile/person.php?id=ID

http://scrape.singpolyma.net/profile/person.js.php?id=ID

http://scrape.singpolyma.net/profile/person.php?url=URL

http://scrape.singpolyma.net/profile/person.js.php?url=URL

And that’s it!  This stuff powers my contacts page and the bookmarklet.
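A minimal client sketch for building those URLs (Python here purely for illustration; no request is actually made, and wiring it to an HTTP library is left to the caller):

```python
# Build the search and person-lookup URLs listed above.
from urllib.parse import urlencode

BASE = 'http://scrape.singpolyma.net/profile/'

def search_url(name, pov=None, jsonp=False):
    """Search endpoint; pov narrows to one person's point of view."""
    params = {'q': name}
    if pov:
        params['pov'] = pov
    endpoint = 'search.js.php' if jsonp else ''
    return BASE + endpoint + '?' + urlencode(params)

def person_url(ident=None, url=None, jsonp=False):
    """Person lookup by id or by profile URL."""
    params = {'id': ident} if ident is not None else {'url': url}
    endpoint = 'person.js.php' if jsonp else 'person.php'
    return BASE + endpoint + '?' + urlencode(params)

print(search_url('NAME'))
# e.g. http://scrape.singpolyma.net/profile/?q=NAME
```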

DiSo Gets Search

Posted on

Tantek Çelik has purchased socialsearchme.com for this service! Thanks!

Never tweet about something you don’t want to go public.  I’ve been annoying my followers for some time now about my new social search engine.  Tantek then linked to it from his WordCamp SanFrancisco presentation.  Not that I’m upset at all.  I’m ecstatic that he thought it was worth linking to!  Still, a word to the cautious 😉

So how does this search engine work? What does it do? Basically, it’s an hCard search engine.  Unlike the Yahoo or Technorati Kitchen implementations, however, this search is focused on social networking and profiles.  If DiSo were Facebook, this could be the friend search functionality.  So instead of having the results be links to pages that contain matching hCards, the results are profiles with social networking data (including contacts) and names, etc.

One other key thing that is different here from pure hCard search is that I am only spidering representative hCards (with some small hacks for well-known sites like Twitter).  This means I don’t spider arbitrary hCard data, instead I am only indexing profile pages.  I use both XFN parsing and the SGAPI to verify claims that two pages represent the same person, and then associate them.  Data from both pages goes into the index as if it were all on one page.  Only one page needs an hCard, since connections are made through rel=me and XFN.  This way, although my profile is on my main page and my contacts are at singpolyma.net/contacts, the search engine indexes them both.
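One small piece of that association step can be sketched like so (illustrative Python, not the engine’s actual code; real XFN parsing handles much more than this):

```python
# Pull rel="me" links out of a profile page: the hooks used to associate
# pages that claim to represent the same person.
from html.parser import HTMLParser

class RelMeParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag not in ('a', 'link'):
            return
        attrs = dict(attrs)
        rels = (attrs.get('rel') or '').split()
        if 'me' in rels and attrs.get('href'):
            self.links.append(attrs['href'])

def rel_me_links(html):
    parser = RelMeParser()
    parser.feed(html)
    return parser.links

page = '<a rel="me" href="http://twitter.com/singpolyma">me on twitter</a>'
print(rel_me_links(page))
```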

To find new pages to index, I spider along XFN (and FOAF, since I also ask the SGAPI) to find pages likely to have the sort of data I’m looking for.  Interestingly enough, this means that social networks like Twitter, Pownce, and Digg, who support hCard and XFN, get almost completely indexed.  There are over 100000 profiles in the index now, and I have only given it one manually : singpolyma.net.

I’m not entirely sure how the data will be useful yet, but I’m really excited about the possibilities.  I firmly believe in making XFN lists, static though they may be, come alive with potential through layers of functionality, be it through plugins, 3rd party services, or bookmarklets.

Speaking of bookmarklets, I have one.  Go to that page, add the bookmarklet, and visit my contacts page (or any other page with lots of XFN data).  Click it and watch that boring list of links and names turn into a more functional social-networking list.

The code has been released under an MIT-style license on my repository.  Front-end is PHP, back-end is Ruby.

DiSo : on our way to fixing your addressbook 😉

Messaging: What I Want

Posted on

I’ve blogged numerous times about XMPP, SMTP, and communications evolution on the web.  I’ve suggested what I want ultimately and snippets of how we might get there.  Here, I am going to outline just briefly what I consider “next steps”.  The big ones.  Get these done, and you will have made a *huge* stride in online messaging:

  1. Allow offline messages (type normal or chat) to be collected as “email”.  Gmail sort-of does this by presenting unseen offline messages in the web interface inbox.  I want IMAP access to these in the inbox and their archive.  Heck, store them in a Unix mailspool (have to store them somewhere anyway) and existing IMAP servers will just work for you!
  2. SMTP messages are type=normal.  If you store offline messages in a mailspool and run an SMTP server on that spool, you’re mostly done.  Might be good to offer real-time delivery of those messages to the user via XMPP as well, though.

That’s it! Sure, more can be done, but if you get the first one done I will be your biggest fan.  Do both and you’re well on your way to an evolution in how we deal with email (from both a user and a protocol perspective).  Yes, I’ve tried to build this.  I want to do it as an ejabberd module, but ejabberd is barely documented.  I’ll try again sometime if no one else does: maybe with ejabberd, maybe with something else.
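Step 1 can be sketched as glue code (hypothetical, and plain Python rather than an ejabberd module; the JIDs and spool path are made up):

```python
# Append an offline XMPP message to a Unix mbox spool, where any existing
# IMAP/SMTP tooling can pick it up as ordinary "email".
import mailbox
from email.message import EmailMessage

def spool_offline_message(spool_path, from_jid, to_jid, body):
    """Store one offline message in the recipient's mbox spool."""
    msg = EmailMessage()
    msg['From'] = from_jid
    msg['To'] = to_jid
    msg['Subject'] = 'Offline message from %s' % from_jid
    msg.set_content(body)
    mbox = mailbox.mbox(spool_path)  # created if it does not exist
    mbox.add(msg)
    mbox.flush()
    mbox.close()

spool_offline_message('/tmp/alice.mbox', 'bob@example.com',
                      'alice@example.com', 'ping me when you are back')
```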

Boxbe AntiSPAM

Posted on

Today I received an email from Boxbe support telling me they had finally given users the option to turn off their “courtesy notification” system.  I couldn’t be happier!  I thought I’d take this post to share my SPAM problems, and my solution.

The Problem

Gmail SPAM filtering is nice.  I may not have it forever, and don’t like to count on it, but it works very well.  Unfortunately, I made the choice when I registered this domain name to set up a catch-all.  At first that was fine, but after over a year *@singpolyma.net was receiving so much SPAM, so fast, that even the Gmail SPAM filter couldn’t keep up.  I began to receive over 40 SPAM messages (sometimes over 200) per day, sometimes all at once!  I didn’t want to disable the catch-all, though… that felt like the wrong solution.

The Right Solution

I decided the right solution was whitelisting.  Since most of the people I know don’t use PGP (yet), there is no way to guarantee the sender of the messages, but from a cursory glance over my SPAM box I decided that trusting the From: header would work for 99% of today’s SPAM.

I can’t set up a forwarder from a catch-all with Dreamhost, so I set it to be delivered into a mailbox.  I then created a “dummy” Gmail account to fetch this mail via POP3.  Bonus #1: Gmail filters all this mail as it comes in, catching a huge number of the illegitimate messages (just not enough of them).  Set Gmail to forward all email to singpolyma@boxbe.com (more on that in a bit) and delete.  Using Gmail as an email pipe/filter, really.

Then, Boxbe.  Boxbe gives you a you@boxbe.com email address that you can forward mail to; it checks it against a whitelist and sends it on if it matches.  Previously, if it did not match, they would reply with a “challenge” email.  This is annoying, broken, and sometimes embarrassing, so I am very pleased that they have now given people the option I wanted all along.  Disable all “courtesy notifications” and turn on the daily report of the queue.  If I receive any mail from people not on my whitelist, I get an email from Boxbe once a day summarizing who tried to contact me.  I go and let through any legitimate new people.  Perfect.
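The whitelist idea itself is simple enough to sketch (a toy version, not Boxbe’s implementation; the addresses are made up):

```python
# Trust the From: header: allow specific addresses plus whole trusted
# domains, and queue everything else for the daily report.
from email.utils import parseaddr

WHITELIST = {'friend@example.com'}
TRUSTED_DOMAINS = {'uwaterloo.ca'}   # i.e. *@uwaterloo.ca

def classify(from_header):
    """Return 'deliver' for whitelisted senders, 'queue' otherwise."""
    _, addr = parseaddr(from_header)
    addr = addr.lower()
    domain = addr.rpartition('@')[2]
    if addr in WHITELIST or domain in TRUSTED_DOMAINS:
        return 'deliver'
    return 'queue'

print(classify('A Friend <friend@example.com>'))
print(classify('stranger@spam.example'))
```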

Boxbe uses the password anti-pattern (although they’re working on fixing that, they say) to import your address book.  They have a CSV importer though.  Export from Gmail, import to Boxbe.  Set up some trusted domains (like *@uwaterloo.ca) and go.

I haven’t seen SPAM since, and have only once or twice had to go over and let through a message that got stopped.

Sharing Links / Rich Messaging

Posted on

There is a fair amount of buzz around messaging systems, be it microblogging or direct messages.  There is also discussion about broadcast social media (share this with all your friends!).  One use case keeps cropping up for me: sharing content with individuals or ad hoc groups.  I will focus here on sharing links, but much of this applies to any media richer than one raw text blob.

If I want to keep a URL for later – I use bookmarks.  This was de facto for a long time.  Then, one day, someone decided it might be cool if not only they could read that page later, but everyone else could too!  Thus, the birth of social bookmarking.  Today, if I want to share a link with all my contacts I simply bookmark it on my Ma.gnolia, and if they care, they’ll see it.

Then, groups.  If I want to share a URL about copyright issues with the Waterloo Students for the Information Commons, we have a Ma.gnolia group.  Interested parties subscribe, and the stream is also syndicated to the main page of our wiki for general interest.  (Aside: if a discussion with the group is to take place around a link posted there it sometimes happens on our mailing list… I’ve recently begun experimenting with Friendfeed rooms for this.  While commenting on FF in generally seems dumb, in this case many of the shared links have no comments themselves and the commentary would only be interesting to other group members anyway.)

One extension of groups, really: ad-hoc groups.  I don’t want to create a new group somewhere and invite everyone who might be interested every time a topic comes up briefly.  It needs to be easy (like, one step, no more than three short fields) and not require people to sign up for anything to contribute/subscribe.  Then it can die out later naturally.  Stronger (more organized) than hashtags, but less formal and permanent than groups.  This is analogous to the cc-everyone chains that develop because people are too lazy to make a small, temporary mailing list.

Alright, now to the big one: point-to-point.  While 1:1 communication is usually not the answer (and this has partially sparked my ideas about ad-hoc groups) – sometimes you just read a page and go “so-and-so would be interested in this”.  This has, in the past, caused me to email URLs to people.  This feels like the wrong solution.  Even Twitter dm doesn’t seem quite suited to this.  First I will describe my ultimate UX, then I will describe what seems to exist today.

I want a button in Firefox (or whatever browser I end up using in the future – Firefox for now) that opens a dialog allowing me to simultaneously save the link into my bookmarks (on Ma.gnolia or wherever), share with an arbitrary number of groups, and with an arbitrary number of contacts.  You can take a peek at my mockup if you like.  This is very different from how, say, Ma.gnolia or Pownce does link sharing.  Note that all of these (my bookmarks, some groups, some contacts) should be optional – I may not want to use all of them each time.  When people send me links this way I want an RSS feed of the links.  If they get emailed to me it is not much better than the original solution.  If they are delivered into some “private message” box we have YAI, and that’s worse.

Tie in to DiSo: wouldn’t it be extra neat if I could type not just, say, Ma.gnolia or Pownce usernames, but could type URLs?  System asks their provider how they prefer to receive links and then sends it that way.  I really don’t want to make people sign up for whatever service I happen to use.

So what can we use today?  Well, there are a few options.

  1. Emailing/dming/@heyyouing URLs can work – but it’s not ideal for one key reason: there is no simple way to get a “list of recent links”.  I don’t want to go through every recent email or tweet to find a URL.  Some people prefer this because it facilitates discussion around the link somewhat.
  2. Pownce.  Using a bookmarklet, one can add links to Pownce and send them to contacts or even “sets” (not-quite-ad-hoc-groups).  The key issues here are that if I also want to bookmark the link (I usually do) I must do that separately with a separate form and bookmarklet.  I must also re-post to Pownce for each contact/set I want to send it to.  There is also the issue that people would have to sign up for yet another social media account in order for me to share links with them – Pownce doesn’t have OpenID support just now.
  3. del.icio.us for: tags.  This is not too bad of a solution if all your contacts are on del.icio.us… and if you use it yourself.  I really need to get that Ma.gnolia-to-del.icio.us bridge project finished.
  4. Ma.gnolia groups.  This is a hack really, but it’s working for myself and a contact of mine.  We have set up Ma.gnolia groups whose sole purpose is for others to share links with us.  Anyone with an OpenID can just log in and start sharing links with us, which we then get from the groups’ RSS feed.  The problems here are: it’s a hack and sharing with more than one group at a time is still a pain.

Enough from me for now.  Think about it.

Blogger Recent Comments Source Released

Posted on

Older readers will remember my Blogger Recent Comments service that was hosted on Ning.  It has been obsolete for some time now (Blogger has provided their own comment feeds since v3) and has long since been shut down by Ning for being a horrible bandwidth hog.  I have thus decided to delete the app (since it’s just taking up space on my Ning account) and release the code under an MIT-style license on my devjavu repository.  Maybe someone will find it useful.

Now to go see if I can use Ning for my next project… Google App Engine not having a decent cron/fakecron is not at all useful… and my Python sucks.

Gmail CSV to mutt Aliases

Posted on

I’ve recently been moving from the slow, bulky AJAX of the Gmail interface to the nice, lean, familiar keybindings of mutt.  Below is a simple ruby script I wrote to convert the Gmail CSV of your contacts to useful mutt aliases:

#!/usr/bin/ruby
require 'iconv'

names = []

# Re-encode to UTF-32 and strip the NUL padding bytes (a crude way to
# boil the text down to plain bytes), then drop the BOM remnant at the start.
Iconv.iconv('UTF32//IGNORE', 'UTF-8', File.open(ARGV[0]).read)[0].gsub(/\000/, '').sub(/^../, '').each("\n") do |line|
  f = line.chomp.split(/"?,"?/)
  # Gmail mangles some addresses as user%domain; restore the @
  f[1] = f[1].split('@')[0].sub('%', '@') if f[1] =~ /%/
  # Fall back to the address when no name is set
  f[0] = f[1] if f[0].to_s == ''
  next if f[0] == 'Name' # skip the CSV header row
  puts "alias \"#{f[1]}\" \"#{f[0]}\" <#{f[1]}>"
  next unless names.index(f[0]).nil?
  # Also emit a whitespace-free alias on the contact's name, once per name
  puts "alias \"#{f[0].gsub(' ', '')}\" \"#{f[0]}\" <#{f[1]}>"
  names << f[0]
end

Extending Microformats: a Return to XOXO

Posted on

I haven’t written about the XOXO microformat in some time, but some recent discussions caused me to dig into my archives to source a new post.  Microformats tend to follow the rule of only formalizing the most common of existing publishing patterns (the 80-20), meaning that some more “edge case” data cannot be represented.  Does this mean that this data is useless?  Not at all: but it is outside the realm of microformats, at least for now.  So we either need to invent something new, or extend what we have.

A Page from Recent History

This is not a new problem.  Every formalised standard is going to face those who feel that their bit of metadata should be included.  Take, as an example, the RSS 2.0 spec.  Core essentials of news feeds are present: title, description, date, etc.  Lots of metadata is missing though: author name, comment counts, comment feed URLs, and more.  People solved this problem in two very different ways: some extended, and some invented something new.

Extending RSS (or any XML format) is easy: create a namespace, add your elements, publish.  If a particular piece of metadata is popular it gets standardised in a spec’ed extension (dc:creator, slash:comments, wfw:commentRss, etc).  The benefit of this approach is that all existing parsers can still read your content.  If a parser doesn’t need your extra metadata, it can safely ignore it and present just the core content.  No new code needs to be written, and no new formats need to be learned for 80% of the applications.

There was another group interested in solving this problem: the ATOM group.  They threw away all the existing formats (RSS 2.0 and RSS 1.0/RDF) and built something brand new from scratch to accommodate their data needs.  What was the result?  Feed aggregators everywhere had to write all-new code to handle this new, incompatible, and often more complicated case.  Time and effort was wasted both in code and user education (unlearn “What is RSS” learn “What is ATOM” / “What are feeds”).  Once the standard hit a spec’ed form, what happened?  People began to use namespaces in ATOM as well, because for all the “better” it was, for some edge cases it just wasn’t “better” enough.

Back to XOXO

It seems the key is to be easily extendable, not to think of everything up front.  If microformats are going to make their way into lots of APIs and not just be used for better page scraping (Ma.gnolia does a good job of this), then extensibility is necessary.  Fortunately, XOXO provides an easy solution.  Check this out:

<ul>
  <li class="vcard">
    <dl>
      <dt>fn</dt>
      <dd class="fn">Martha</dd>
      <dt>Anniversary</dt>
      <dd>2005-02-04</dd>
    </dl>
  </li>
</ul>

An hCard parser can read that.  For a normal use case, no new code is needed.  An XOXO parser can read that, and if it knows about hCard will likely know what “fn” means.  The other data is there, though.  The parser has that data.  Minimal new code, and all the data can be used.  Cool or what?
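To see why the extra data survives, here is a rough sketch of pulling key/value pairs out of that <dl> — a real XOXO parser would walk the DOM; the regex here is only for illustration:

```javascript
// Rough illustration only: a real parser would use a DOM, not regexes.
function parseDl(html) {
  const pairs = {};
  const re = /<dt>(.*?)<\/dt>\s*<dd[^>]*>(.*?)<\/dd>/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    // hCard keys like "fn" land beside extras like "Anniversary"
    pairs[m[1]] = m[2];
  }
  return pairs;
}

const fragment = '<dt>fn</dt><dd class="fn">Martha</dd>' +
                 '<dt>Anniversary</dt><dd>2005-02-04</dd>';
parseDl(fragment); // both the hCard field and the extra data come through
```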

Free Content Licences

Posted on

Fred Beneson:
As far as I understand it the GPL, (like most other licenses including CC, etc.) doesn’t require prosecution explicitly in the case of a violation of its terms, so much as it requires a cessation of distribution of the binaries or offending files.

Copyright law never requires prosecution.  It only allows for it.  Historically, GPL’ed projects have requested cessation of distribution or an opening of the source as out of court settlement.  The GPL cannot really require this, however, it can only specify the terms under which one can use the copyrighted work.  Thus, suing for damages is a right under law of the copyright holders on GPL’ed code when the GPL is violated, same as it is for copyright holders on All Rights Reserved works.  All other (more common) results are just projects being “nice” and handling out of court settlements.

IANAL. TINLA.

AtomPub + OAuth for WordPress!

Posted on

After getting the answers I needed from the OAuth list, I decided to go back to hacking at getting OAuth to play nice with AtomPub on my host.  I am pleased to report that it now works!  It requires a two-line patch to WordPress (for my host anyway, YMMV), and I had to change the wp-oauth plugin a bit (latest in SVN), but I have successfully posted to my test blog using a remote AtomPub script authenticated using OAuth.

See some example code.  The future is bright!

“Arbitrary” Communications?

Posted on

You have probably realised by now that I’m very interested in forms of communication and the best ways to go about improving them.  What about communications from those you *do not* know?  I can get telephone calls, SMS messages, emails, and Twitter @replies (among other things) from people who have not been whitelisted (aren’t in my address book / on my friends list).  Is this useful? What forms of communication suit it best?  This poll started on Twitter, and I’m continuing it here and on PollDaddy.

Picoformats 0.20

Posted on

I have released an update to my Picoformats plugin. This update changes the logic so that posts are not modified in the database (thanks, @aditya!), but on the fly. It will also link to the local profile/archive of a user (thanks, @als!) that has no URL set in their profile. Also, if you use an @ reply from inside a comment and use the (one-word) name of a comment poster, it will recognize this (if they have whitespace in their name, just take it out when writing the @ reply). @ replies in comments also do not check Twitter usernames anymore, since this is expensive and breaks common use.

Download the plugin

Picoformats for WordPress

Posted on

This plugin is totally an experiment inspired in part by @techcrunch, and in part by how useful I have found some of this stuff to be on @Twitter, and also just to see the different ways one can use WordPress. (Update: this plugin seems to work only on WP2.5, so I’ve upgraded.)

If you haven’t guessed already, you soon will. Yes, I’ve implemented @ replies for WordPress. It looks in the local users (usernames and nicknames, like @singpolyma) first, then in the names and descriptions on blogroll links, then it checks if you are trying to use a URL (like @singpolyma.net) and, finally, if none of those yield a result it checks if the string is a valid Twitter username. It produces semantic markup for an @ reply and “person tag”:

<span class="reply vcard tag">@<a class="url fn" href="URL">NICKNAME</a></span>
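The lookup cascade can be sketched like so — the lookup functions here are stand-ins for the plugin’s WordPress and Twitter queries, not its actual code:

```javascript
// Sketch of the @ reply lookup order; each lookup returns a URL or null.
function resolveAtReply(name, lookups) {
  const local = lookups.localUser(name);     // 1. local usernames/nicknames
  if (local) return local;
  const link = lookups.blogrollLink(name);   // 2. blogroll names/descriptions
  if (link) return link;
  if (name.indexOf('.') >= 0) {              // 3. looks like a URL, e.g. @singpolyma.net
    return 'http://' + name;
  }
  return lookups.twitterUser(name);          // 4. finally, try a Twitter username
}
```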

Then, the plugin sends trackback pings to the URLs, to let the people know you’re talking about them. The plugin also implements trackback receiving on the WordPress main page so that users can receive these pings.

Not to stop so short, the plugin also implements #. What that does should be fairly obvious.

These features work in posts *and* comments.

Download the plugin

# # #

Actionstream 0.45

Posted on

I have updated wp-diso-actionstream to 0.45, changes include:

  • Fully tested WP2.5 support
  • Fixes for Last.fm support
  • Better microformats output

CopyCamp Summary

Posted on

Some of you may have seen my heightened Twitter activity for CopyCamp, an unconference for discussing issues concerning copyright. Government representatives, artists, and geeks alike spent a day and a half discussing licensing, business models, DRM (TCMs), and many more things. I took some notes, here are some summaries:

Production Tools

Related to how content we distribute should be licensed is the question of what tools / formats are being used in content production. People should be able to use any tools they want without being limited by what tools others / their audience / publishers use. The solution here is, of course, open formats – but open formats must be widespread to be usable. Users should not have to think about what they’re doing, it should transparently work.

Users often honestly don’t know there are options in tools – they just use what they’re given / trained on. While this should be possible to do (see above) education about options and competition is good. Some users are more comfortable going a roundabout way than using the better way / tool – this is fine, but when they encounter new mediums / techniques they should be given the “right” option as much as possible.

Someone can be a brilliant artist unable to afford the tools. Should the ability to do art be a right (ala education)? What about the ability to create documents (such as business plans) that are part of how we function? [I don’t think so; although I am a FLOSS supporter and would like to see more of this, I don’t think the government should be involved.]

Does the value of a piece of art come from the art itself or what you can do with that art? [I think both in all cases.]

Net Neutrality

Phone and cable companies (the major bandwidth providers) build business models based on having a smart network and dumb terminals – the Internet is a dumb network with smart terminals. Do telcos own the network? The government legislated their right to use the land, they just own the copper. The citizens really have as much right to it as the telcos. We can separate production from distribution and not let the telcos have such a monopoly (think deregulation of electricity).

Public Domain Registry

MediaWiki, Creative Commons Canada, and others are working to catalogue ( / host ? ) all Canadian books in the public domain. This is a hard problem since many books do not survive the long life of their copyright. This perhaps should be a federal issue (preserving culture) – but private funding works because the government will not. The “restriction” to Canada does not limit distribution, and so does not limit the uses of the project. It is important to be able to tell what parts of a work are PD (such as poems in a collection) as well as what whole works.

Music Business Models

Fading Ways Music is an indie label whose artists are all CC-BY-NC-SA [somewhat evil, yet so forward thinking]. They primarily use the traditional pay-per-unit business model though.

Jamendo is a site that hosts libre content music and shares ad profits with artists.

Don’t protect the existing models (ie, music tax) – instead come up with / use better models.

The “rights” of the public need to be balanced with the rights of the artist. One model is a take on the “freemium” model where high-quality content requires a fee-based membership (ala 76fanclubs and others). There are benefits, especially to the long tail, of such a model. (OH HAI, BTW, LONG TAIL ALWZ HAPPNS, LIEK ON BLOGZ.) There is a growing desire to remove intermediaries between artists and fans, yet the favourite tools (MySpace / Facebook / Last.fm) really *are* intermediaries (albeit automatic ones), but artists think it’s a pain to update them all and are turning to more layers of indirection again (can haz DiSo?).

Scaling Communications (or, the Right Tool for the job)

Posted on

I’ve been interested in different forms of communication for some time.  It’s part of what makes social networking so interesting to me.  I’ve been reading about others’ experiences too.  Of course there’s Tantek’s CommunicationProtocols page, which inspired my own Communication Protocols section on my main page (and, probably, @seanbonner’s PreferedMeansOfContact).  Trevor Creech recently challenged me on my Twitter usage, calling it “Twitterfail” (in reference to efail).  I would like to discuss some of the ways I’ve started thinking about communication.

First, a tip from my own main page: If you find a solution, from me or elsewhere, blog it. Someone else may benefit.  I have come to think of my Twitter and Ma.gnolia accounts as blogs (especially since they began to manifest their updates in the actionstream on my main page).  If I find an interesting tidbit, or have a potentially interesting thought, I tweet it.  This led, one day, to >20 tweets in 24 hours, the condition which Trevor complained about.  In response, I have tried to consider first if something is really useful at all before I tweet (I don’t want to be the cause of a signal/noise problem) – but I have also started including better context in my tweets.  Interesting links go to Ma.gnolia.

Searchability has become key for me. This is one of the reasons I love my IM setup – everything anyone says to me, whether I’m online or off, at my computer or not, is archived in Gmail for easy search.  My tweets and blog posts are also searchable – in fact, if you just say “singpolyma” or link to me in a blog post, there’s a huge chance that I’ll see it.

When it comes to factors like immediacy, lifespan, audience, bandwidth, and synchronicity, they are all important, but are different for different messages.  If I’m setting up a meeting or working on a project, immediacy and bandwidth are hugely important (thus, face to face or IM are best).  If I’m discussing something of interest to me, asynchronicity, lifespan, and audience are the most important factors (thus, mailing lists, forums, Pibb, and IRC are best).  There is no “perfect” communication form – all have their place.

I have requested that people use post/page comments for debugging/feature requests on my projects.  This is because a comment is almost as good as blogging something (it can be used by others who may benefit) and is searchable.  It also reduces the chances that I get asked for the same thing a bunch of times – others can see what is being discussed (which, incidentally, is the same reason I love GetSatisfaction).  Pages + comments are almost as good as (in fact, in many cases, I feel are better than) a wiki.

ActionStream 0.40 and DiSo Profile 0.25

Posted on

I have updated two of my DiSo plugins: Profile and ActionStream.

The profile updates mostly involve some code cleanup, a page here documenting it, and a new API to add permissions options to the permissions page.

The ActionStream update is a bit more extensive:

  • Support for coComment
  • Code cleanup, of course
  • RSS2 output option, linked from the stream output (add &full for a different view)
  • Reportedly working in WP2.5 with a patch I accepted
  • Better Safari support
  • If you disable showing your service usernames they are also hidden in the collapsed items
  • Ability to set permissions on updates from each service (if wp-diso-profile0.25 is installed)

OAuth and XRDS-Simple in WordPress

Posted on

I’m publishing two plugins today.  The first is pretty simple in what it can do for users directly – the XRDS-Simple plugin allows users to delegate their OpenID to their WordPress blog – basically letting you log in on OpenID enabled sites using your blog address, but without needing to run your own provider.

On a far geekier level, the plugin allows other plugins to add XRDS-Simple services and other information (such as OAuth Discovery) using a programmatic API.  A brief example of this API is on the plugin’s page.

I am also releasing a more DiSo related plugin – WP-OAuth.  This plugin enables interacting with WordPress authentication using the open OAuth protocol.  This could be exciting if combined with AtomPub or another protocol / format supported by WordPress or another plugin.

OAuth Discovery

Posted on

Take 2! Now enhanced with XRDSs! Eran has blogged about the changes and the initial vendor support.  This plays right into my dream of infinite interop.  I’m quite happy about how small the spec is now that it just rides on XRDSs.  There’s some weirdness (need two XRDs, can have one XRDSs reference another).  Eran has explained his reasoning to me and it makes sense, but I’m still not convinced that it’s necessary.

Anyway, I should roll out a new XRDSs and OAuth DiSo plugin soon with support for draft 2.  And new examples.  There is an alternate PHP class that Eran says will include support. I will probably use that when it comes out, but I’ll bootstrap with JanRain Yadis and the standard OAuth PHP class for now.

AWriterz Relaunched!

Posted on

I have re-written and re-launched AWriterz – my service for hosting creative works.  Basically, I took my Web 1.0 service and brought it to the new web.  Logins are now powered by OpenID and it tries to get some data automatically through SREG, SGAPI, and hCard.  There is a karma-like system to prevent spam (and eventually reward active users), and users can tag and rate every work (and upload their own).  I’m trying to emphasize CreativeCommons on the site, but I’m not enforcing it.

I’m really hoping to build more DiSo-like functionality in as the site progresses and I get more user feedback 🙂

XRDS-Simple and Infinite Interoperability

Posted on

Eran finally released the XRDS-Simple Draft 1 spec. Chris Messina has some great thoughts about how this fits into DiSo, and Eran has done some good explanations himself.  I’m just going to give some of my own thoughts.

First off, forget whatever you think about related standards, like XRDS or XRI. If you dislike OASIS, can it. I’m sick of the fighting and just want the web to *work*.

I’d like to talk a bit about my vision of infinite interoperability, which is facilitated somewhat by XRDSs. Little to none of this is implemented today.  I’m not expressing a pragmatic “let’s build this now” but a hope for what I would love to build in time.  I’m going to use brands familiar to me – if you hate them, pretend I used different brands.

Imagine this: Flickr is an OpenID consumer with OAuth as its authentication standard on all APIs.  They have moved all their content listing, posting, and editing features over to APP (yes, APP supports binary data, like images).  They are using XRDSs for discovery of the endpoints to *all* APIs.

I write a website.  This website lets you easily create collages of images from free sources for use in projects (I dunno, pick a more useful project idea).  You log in with your OpenID and create a collage.  You can print it out and embed it on your site, but you want to share with friends.  You click a ‘share’ button and are taken to flickr where OAuth authenticates you (maybe using shortcuts since we know your OpenID already) and posts the image to your account using APP.  It knows where all the endpoints are because of XRDSs.

How is this different than what can be done with the Flickr API today? Just wait.

Your friend sees your collage and wants to make their own.  They also come to my site.  They also click share.  They do not use Flickr, but rather Zooomr, for their images.  Zooomr is also OpenID+OAuth+APP+XRDSs enabled.

My site *does not* have to know about Zooomr.  You can simply enter “zooomr.com” into a settings box and my site will *automatically* (XRDSs) find the endpoints to authenticate (OAuth) and post (APP), and your friend can share their image.  I support four standards and get access to *every* photo sharing solution, without even knowing they exist or having to care.
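As a hedged sketch of that flow, with stub functions standing in for real XRDS discovery, OAuth, and AtomPub clients (none of these endpoints exist):

```javascript
// The site only knows four standards; the stubs supply everything
// service-specific, so it never needs Zooomr-specific (or Flickr-specific) code.
function shareTo(siteDomain, entry, stubs) {
  const endpoints = stubs.discoverXRDS(siteDomain);       // XRDS-Simple: find the endpoints
  const token = stubs.oauthAuthorize(endpoints.auth);     // OAuth: user approves access
  return stubs.atomPubPost(endpoints.app, token, entry);  // AtomPub: post the content
}
```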

Actionstream 0.35

Posted on

I’m updating too fast.  This release brings some improvements to the UI for adding services.  There are also now options to use the plugin as a sidebar widget or with an easy-to-include post/page tag.  This release also brings the recognition of a requirement of PHP5.2.0 or higher (after much debugging with James Kirk).

And, as always, bug fixes!

See the plugin page for more information /download. 

Bite the Bullet – CC-BY

Posted on

This week I was horrified when @adityavm moved from the most evil of Creative Commons licenses (BY-NC-ND) to All Rights Reserved.  As an advocate of free culture I began to realize just how dangerous it can be to walk that evil line of just-barely-free.  I took a long, hard look at my own licensing practices and decided it was time to open up.  All previous and future entries on my blog (unless otherwise noted) are now licensed CC-BY.  Let freedom reign!

Actionstream 0.30

Posted on

Some significant improvements to my ActionStream plugin. The plugin can always be downloaded from that page.  The changes are:

  1. Bugfix in how some feeds were handled (notably Google Reader)
  2. If nicknames are being displayed, they are hCards with links to your profile at that service
  3. Including updates from your own blog is now optional
  4. There is now an option to remove services you have added
  5. Collapsed (5 more… et al) nodes may now be expanded on link click

Actionstream Plugin Update

Posted on

I am pleased to announce version 0.2 of my WordPress Actionstream plugin!

It can be downloaded from the normal place.

New this release:

  1. Better microformats support in the output
  2. Some architecture improvements and bug fixes
  3. There is now a sanity check for zero items or fewer items than requested
  4. Posts on the host blog are now added to the actionstream
  5. There is a well defined way to add stream items (say, from another plugin). Just create an array with the fields you need (be sure to specify identifier and created_on – GUID and unix time of publish, respectively) – usually includes title and url. Then instantiate an object of class ActionStreamItem and save it like so: (new ActionStreamItem($array_data, 'service', 'data_type', $user_id))->save();
  6. There is now a hook for other plugins to add available services. Example:
    actionstream_service_register('feed',
      array(
        'name' => 'Feed',
        'url' => '%s'
      ),
      array(
        'entries' => array(
          'html_form' => '[_1] posted <a href="[_2]" rel="bookmark" class="entry-title">[_3]</a>',
          'html_fields' => array('url', 'title'),
          'url' => '{{ident}}',
        )
      ));

On Actionstreams and Blogging Your Own Work

Posted on

So first, the plugin.  I have basically ported the MT Actionstream plugin to WordPress (you can see it in action on my main page).  This is pretty cool, and it means we can easily share settings with other efforts.

New in this release is the ability to import a list of services from another page (like MyBlogLog or FriendFeed) using SGAPI.

Code lesson: sometimes the WordPress docs lie.  They say that you pass a function name (or array with object reference and function name) to wp-cron hooks to schedule regular actions.  Not true.  You pass the name of a WordPress action (added with add_action).

Blogging Lesson: Blog your own work.  This plugin has been covered by at least four blogs now (more than most of my stuff) and not yet by me.  I just posted the plugin on the DiSo mailing list and people liked it.  I’m not complaining, but I’ll definitely post my own stuff up front in the future!

JSONP in IE

Posted on

Another post based on a previous tweet. This took me at least an hour to debug, so I thought it might be worthwhile sharing.

IE, apparently, gets unhappy when you append nodes to the end of a node it hasn’t finished rendering yet. In practice, this means it blows up when you say document.body.appendChild before the page has loaded. The easy solution? Append to a node that has already loaded! What node is almost guaranteed to be there when the body is rendering? The head node of course! Here is code:

document.getElementsByTagName('head')[0].appendChild(script);
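Wrapped up as a reusable loader, the same trick looks something like this — the `callback=` parameter convention is an assumption about the endpoint, not part of any standard:

```javascript
// Build the JSONP URL; split out so the string logic is easy to test.
function jsonpUrl(url, callbackName) {
  return url + (url.indexOf('?') < 0 ? '?' : '&') + 'callback=' + callbackName;
}

function loadJSONP(url, callbackName, callback) {
  window[callbackName] = function (data) {
    window[callbackName] = null; // clean up the temporary global
    callback(data);
  };
  var script = document.createElement('script');
  script.src = jsonpUrl(url, callbackName);
  // Append to <head>, not document.body: the head is finished rendering
  // by the time any script runs, so IE won't blow up.
  document.getElementsByTagName('head')[0].appendChild(script);
}
```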

Content Publishing Protocol

Posted on

Since APP (understand here) is mainlined in WordPress, it makes sense to use it in DiSo efforts.  I doubt that my OAuth plugin will work here, but it’s worth testing.  It may mean using headers, but with comment and discovery support we should be able to build a distributed commenting system, at least for WordPress.

I’ve thought about other APIs that would be useful for DiSo.  For example, adding friends or groups.  APP does not fit this, but the general concepts do.  Perhaps APP can be abstracted into more of a CPP.

GET on main endpoint to list items (ATOM can always be the main wrapper here).

POST to main endpoint to create new items.

PUT to node to edit.

DELETE to node to delete.

Authentication unspecified (HTTP Basic or OAuth work well).

If the content of your POST and PUT requests is ATOM, you have AtomPub.  The same basics can easily work with other content.  (The other content types could be encapsulated in ATOM entry bodies on the GET list, or XOXO).

For example, a POST body of XFN+hCard could add a friend.  A PUT body of hCard could edit a profile (ie, to add groups).

I would also like to suggest that POST on a node could be used to add comments (create new content on a content node).
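The whole verb mapping can be sketched as plain request descriptors — the endpoints are hypothetical, and a real client would attach HTTP Basic or OAuth credentials to each request:

```javascript
// Map CPP actions onto HTTP verbs; returns a request descriptor
// rather than performing I/O, so the mapping itself stays clear.
function cppRequest(action, endpoint, body) {
  var verbs = {
    list:    'GET',    // GET on the main endpoint lists items
    create:  'POST',   // POST to the main endpoint creates an item
    edit:    'PUT',    // PUT to a node edits it
    remove:  'DELETE', // DELETE to a node deletes it
    comment: 'POST'    // POST on a node adds a comment beneath it
  };
  return { method: verbs[action], url: endpoint, body: body || null };
}

// e.g. adding a friend: POST an XFN+hCard body to a (hypothetical) friends list
cppRequest('create', 'http://example.org/friends', '<div class="vcard">…</div>');
```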

Why the SGNodeMapper is a bad idea

Posted on

Don’t get me wrong, I love Google’s Social Graph API, it’s a great way to speed up the discovery of XFN data by using Google’s cache.  What does not make sense to me, however, is their ‘NodeMapper’ concept that is built in to the API.  It maps multiple URLs from a site on to, not a single URL, but a SGAPI-only URI scheme.  It maps using URL patterns that are known about the site, so it doesn’t even work on the web in general.  When it does work, what is it useful for?  URL consolidation.  The problem is that the only thing you can do with a nodemapped URI is (1) use it as a unique key or (2) turn it back into a URL to get data.

I don’t get it guys.  How is this better?  Is there even a reason to consolidate things like FOAF files backwards to the main page, since most people will enter the main page itself as input anyway?  Even if it was useful, shouldn’t it actually map to the main page and not to some proprietary URI scheme?

Thoughts?  Anyone see a use for this that I’m missing?  Or is this misfeature just adding a layer of data that someone might use and that we’ll have to hack around again later?

SGFoo Digestion

Posted on

I got back Monday morning from SGFooCamp.  This is a sort of dump of my thoughts on the event and the results.

The first thing, for me, was the actual networking that went on.  I met lots of people that I’ve followed online for some time and many more. The informal discussions that “just happened” were, I think, informative to all.

The talks I attended were all excellent.  I was inspired with a number of ideas that will make it into my DiSo plugins.  I gained a lot of insight into what users expect vs what I, as a geek, tend to assume.

Two specific points of awesome:

1) Talk-turned-flame-war about DataPortability Workgroup.  Good to see the air cleared at least. Way too much hype there.

2) Witnessing one of the first usefully federated XMPP PubSub messages.  Seeing just how fast it can be.

If I got one thing from the (un)conference it would be the value of a demo.  Onward to DiSo hacking!