Northern Voice 2008 Day Two – accreted notes

13:30-14:10

Alan Levine. cogdogblog. voicethread. the internet is really big.

Lost in Vancouver

really hilarious Cinderella story told through PPT.

http://cogdogroo.wikispaces.com/StoryTools

jumpcut=imovie in a web browser

googlemaps api with Flickr, blabberize.com

———

14:15-15:00 –

Kris Krüg and Alex Waterhouse-Hayward. “The Other Side of Two Dimensions”

lots of pictures. thinking in 3D is what we do with digital photography and not with analog?

Kris: what are we losing?

Walter Benjamin is sadly not present (e.g., these same arguments were leveled at photography when it emerged, versus painting). the debates runneth under?

———-

15:30-16:00

Susie Gardner – widgets

gawd the wi-fi sux.

slick Dilbert widget.

yeah I’m done buggering around with my prez (many fuzzy pics due to compression needs – some content revised from Nokia presentation last Fall, but much more added in).

Last.fm widgets. How to put it in blogger.

etsy – building widgets. you can sell things

librarything.

polls and surveys. polldaddy.com. thisnext.com

Northern Voice 2008 – accreted notes

Rough notes from today’s conference…

13:30-14:00: An afternoon discussion with a Vancouver City Planner, a rep from Ma.gnolia, some guy from Portland, and many others.
the challenge for cities: adopting social media for public participation initiatives.

how do people get something back? more than just “thanks for coming”. stewardship is important. but how do we get people unplugged and back in the community, actively participating?

Irwin: it’s a conservative time, politically. lockdown mentality within city staff and bureaucracy.

___________

14:00-14:30 – TransitCamp: sign the letter. translink is not open. we want open data.

______

14:30-15:00

http://mobile.shoeco.biz. How do people learn on mobiles? Awesome. Low turnout, however.

shoeco is a fictional shoe store mobsite. how would such a proprietor learn how to use it? using wordpress with mobiles. it works.

simple wordpress widget. where can I d/l it?

my post didn’t work.

interfaces are “brutal”

how do we condense knowledge into mobile-sized pieces? that’s what he’s seeking to solve here. – how would i do that “while i’m driving along”?

——–

15:30-16:00: Megan Cole’s Social Media

couldn’t hear anything, froze almost to death

——-


Chris Heuer and Roland Tanglao on mobile blogging

mobile blogging=video, audio, multimedia blogging

what tool is appropriate for what circumstance?

I asked: why the candid sharing of media? why not work with a script? Roland thinks it’s a difference btw old/new paradigms. I’m not so sure.

publicity/privacy issues act as a mediating membrane of sorts.


The PhD – the comprehensive exams

So I’m diving straight into my comprehensives now. I’m building lists and checking them twice (and more). While building these reading lists is in many ways a very personal journey, I’ve decided to blog about the process so that I might get feedback from unexpected locales, harnessing the “wisdom of crowds” (while simultaneously, in both comp areas, critiquing how such “wisdom” is in fact problematic). I also hope that documenting the process can help others through it. I won’t be posting my full notes (who’d read them?), but I will share my definitional essays, my questions and answers, and an account of process along the way. I’ve written up preliminary précis of my two exam areas below.

Area 1. Science and Technology Studies. This will involve SCOT, ANT and other critical theories, but it will also dovetail through Philosophy of Technology (I’m thinking Heidegger through Marcuse and Feenberg, and also including Ellul and others, perhaps a few off the beaten track – we’ll see). I want to ask questions through this literature about the relationship of technology, power and social organization/social change. There are also some intriguing connections with Area 2 (below) via Hennion/Latour (their work on culture industries), and, according to one of my advisors, Habermas’ Public Sphere as well as Lazarsfeld.

Area 2. Culture Industries/Sociology of (The) Art(s). Starting with Hesmondhalgh’s (2002) synthesis of political economy of media studies with cultural studies and other approaches to the sociology of art, I plan to broaden this area out to include American and Continental approaches to the study of cultural/creative industries. This will likely include a range of approaches, including Bourdieu (various), Becker (various), and DeNora’s more recent work, but this area is still under development. There are important connections to literature on occupations (Balfe, Latouche), organizational studies (Throsby, Sacco, Menger), and of course The Frankfurts. Essentially, as I told one of my advisors this morning, I want to survey the body of work that theorizes culture industries, without limiting myself to a particular tradition (as approaches vary broadly). Keep in mind that I wish to keep this area current, as well, and as such I will need to make room for Eglash’s (and others’) work on appropriation, as well as Jenkins’ (and others’) work on fan culture. Sprawling enough? It makes sense to me.

My advisors, colleagues and friends (as well as the “social web” hoi polloi) can feel free to jump in any time. Like you have any time.

Single Sign-On and Content Aggregation: a Preliminary Analysis of their Potential in Facilitating Progressive Social Change

What is the relation between the technology of single sign-on and community mobilization?

There are two approaches to – or models for – the twin issues of convenience and security in our current era of mass content browsing: (1) single sign-on (OpenID, MicroID) and (2) content aggregation (Jaiku, Pageflakes, Readr). Both solve certain problems in terms of managing content communities and users. There is no reason why either model cannot be employed to accomplish the same goal of mobilizing and invigorating communities – politically, culturally, environmentally, socially, and so on. Essentially, both approaches enable the construction of activity streams that users can publish, share, syndicate, and read.

There are crucial differences between these two approaches, though, which bear implications for their social deployment. Single sign-on puts users in a position to conveniently sign up for numerous applications and web services with the same ID (reducible to an email address, URL, or phone number), while content aggregation streams users’ many different accounts into a single location, giving them the freedom to import and export feeds from other sites (ranging from the lightweight Jaiku to the sprawling, all-encompassing Facebook).
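The aggregation model is easy to see in miniature. The sketch below is a minimal, illustrative Python example of a content aggregator: it flattens items from several feeds into one chronologically ordered activity stream. The feed names and contents are invented for illustration; a real aggregator would fetch live feeds over HTTP rather than read inline strings.

```python
# Content aggregation in miniature: merge several RSS feeds into a
# single activity stream, newest item first. Feed XML is inlined here
# for illustration only; a live aggregator would poll feed URLs.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEEDS = {
    "blog": """<rss><channel>
        <item><title>New post</title>
              <pubDate>Sat, 23 Feb 2008 10:00:00 +0000</pubDate></item>
    </channel></rss>""",
    "photos": """<rss><channel>
        <item><title>Moose Camp pics</title>
              <pubDate>Sat, 23 Feb 2008 12:30:00 +0000</pubDate></item>
        <item><title>Skylights</title>
              <pubDate>Fri, 22 Feb 2008 09:15:00 +0000</pubDate></item>
    </channel></rss>""",
}

def aggregate(feeds):
    """Flatten every <item> from every feed into one stream, newest first."""
    stream = []
    for source, xml in feeds.items():
        for item in ET.fromstring(xml).iter("item"):
            stream.append({
                "source": source,
                "title": item.findtext("title"),
                "when": parsedate_to_datetime(item.findtext("pubDate")),
            })
    return sorted(stream, key=lambda entry: entry["when"], reverse=True)

stream = aggregate(FEEDS)
print([entry["title"] for entry in stream])
# → ['Moose Camp pics', 'New post', 'Skylights']
```

Note what the aggregator never needs: a shared identifier across services. Each feed arrives on its own terms, which is precisely the flexibility the essay returns to below.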

The two models differ by degree. The amount of “in-house” functionality offered in content aggregating services makes a crucial difference in the ways in which these technologies are adopted – their bias, of sorts. Whereas OpenID simply authenticates “who you are”, Facebook more intensively mediates self representation by deploying its own services (messaging, mobile updates, profile pages, and so on), and by inviting developers to build little gates into its fenced (though not quite walled) content garden.

Facebook is also letting developers decorate the place with garden gnomes and suchlike. It is worth considering the potential negative implications of development solely for one corporately-owned platform, either presumptively (Facebook) or retroactively via buyouts (Google’s approach, e.g., their recent purchase of Jaiku). As a sidebar, these represent more systemic problems on the horizon of social networking and social change – the traditionally uneven political economy of the information economy, which keeps growing its corporate heads back like a hydra, no matter what utopian promises are being made at any given time by any particular indie widget pusher. But I’ll save that discussion for a later time. Here I am concerned strictly with considering which of two technical models might be more specifically appropriate for the function of enabling social change via open content and communications.

Ultimately the Facebook model (fenced-in web within a web) will fail in competition with services that leverage the myriad devices and software platforms that currently populate the growing mobile technical ecosystem. With no clear standard for operating systems on mobile phones, there is much work to be done to enable everyone to talk to everyone else. On the other hand, application developers have an immense opportunity to build tools that facilitate syndication and sharing over thousands of different mobile devices and networks – and crucially, in effective community mobilization crossing boundaries of culture, geography, and social class, this involves devices that range from iPhones and Nokia N-series computers down to the lowest-tier SMS-capable phone. Given this variability, the argument for an authentication protocol that is, as much as possible, only that – an authentication protocol – and not an “environment” like Facebook (a web within a web) becomes all the more compelling in terms of bridging divides.

Where the goals are social, cultural, and political, primarily – mobilizing communities to create and share mobile-generated content with the underlying aim of improving people’s lives in tangible, measurable ways – this can take many forms. The definition of “community”, or the definition of the user group, is crucial here. For instance, a mobile web services platform can accelerate citizen activism (sousveillance of arrests and/or protests, which has proven highly effective in providing a limited “fourth estate” that keeps police, government officials and other powerful entities in check, a count on which traditional mass media has failed miserably). Mobile web services can also help invigorate communities of independent musicians and music audiences, providing platforms for content and fan-artist-remix interactions on-the-go. And, mobile web services can enhance and amplify existing community cultural infrastructure, something Mobile Muse 3 specifically aims to accomplish with its development of projects in partnership with cultural organizations around Vancouver and the Province of B.C. For all these instances, single sign-on and content aggregation provide good models for coordinating clouds of user-generated data into a navigable, mappable semantic space.

There are other models of community media, however, that call into question the viability of single sign-on, and that point to content aggregation as the better model. In particular, there are two: contexts where identities are divisible, and contexts where identities are combinant.

Combinant identities
In many rural communities in the Third World, mobile devices are shared – by couples, by families, and in some cases by entire villages. In cases where the intent is to distribute and share not only the software – along with the text, images, audio and video carried over the mobile media service – but also the hardware (the phones), single sign-on poses complex problems. How do multiple users properly authenticate on a shared wireless account/phone number? How could a single phone be configured to accept multiple accounts? Obviously, swapping SIM cards doesn’t get around the problem, as this necessitates the purchasing of multiple wireless accounts. In Vancouver’s downtown east side, for instance, how would a shared mobile infrastructure (including shared handsets) work? Wi-fi phones – such as Nokia N-series phones – are only a partial answer, as wireless internet is not (yet) ubiquitous in Vancouver, and effective use of mobile browsers is not enabled by the current applications available for N-series phones or their competitors. In short, a combination of protocols (SMS, MMS, Bluetooth and Wi-Fi) are the best bet – enabling as many connections – both free and paid, both easy and challenging – as possible.
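To make the problem concrete: the sketch below is a purely illustrative Python model (not any real platform’s API) of a shared handset. One wireless account carries several local profiles, so authorship has to be carried in the message payload itself; it cannot be inferred from the sign-on identifier, which is exactly where the single sign-on assumption of one identifier per person breaks down.

```python
# Illustrative thought experiment, not a real platform's API: a shared
# handset modeled as one wireless account carrying several local
# profiles. Single sign-on assumes one identifier == one person; here,
# attribution requires an explicit "who is speaking?" step instead.
class SharedHandset:
    def __init__(self, phone_number):
        self.phone_number = phone_number   # one SIM, one billing account
        self.profiles = {}                 # many people behind it
        self.active = None

    def add_profile(self, name):
        self.profiles[name] = []           # each person keeps their own posts

    def switch_to(self, name):
        if name not in self.profiles:
            raise KeyError(f"no profile {name!r} on this handset")
        self.active = name

    def post(self, text):
        # The service only ever sees one phone number; the author must
        # travel inside the payload, not in the network identifier.
        self.profiles[self.active].append(text)
        return {"from_number": self.phone_number,
                "author": self.active,
                "text": text}

village_phone = SharedHandset("+1-604-555-0100")
for person in ("Amina", "Joseph"):
    village_phone.add_profile(person)

village_phone.switch_to("Amina")
msg = village_phone.post("market opens early tomorrow")
print(msg["author"], msg["from_number"])
```

An aggregation model can consume streams labeled this way without caring how many people stand behind one number; an SSO model has no slot for that information at all.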

Divisible Identities
The second scenario in which single sign-on fails is where users deliberately maintain multiple identities within the same or different sites or environments. Youth and marginal and/or vulnerable populations may require these multiple identities in order to manage the diverse range of social contexts they must occupy in order to survive (for example, keeping family, friends, work, and school distinct even if overlapping).

There is also a third possible area of identity management (a mashup of divisible and combinant, if you will) that comes into view in the present analysis: the recombinant identity. This is particularly important in a (post- or non-modern) world of dynamically shifting alliances and antipathies, where identity may be continually reshaped to accommodate a diverse range of individual and (recombinant) group needs and goals. Here, too, single sign-on cannot compete with the contextual flexibility afforded by the model of content aggregation.

As a structuring model for the development of community mobile services, single sign-on is problematic in terms of how it reduces individuals to indivisible and noncombinant entities. Content aggregation seems to be a far more viable model for community building and mobilization, as it is adaptable to a wider range of social and cultural contexts – in which identity may be conceived in different ways, or in which it may simply manifest in technical networks in different ways due to the exigencies of human survival.

Ex-Perry Mental Geekery

Nice to be back in the swim of things. I just put a final report out the door on a research project that I’d been working on for 14 months. It was a difficult project – one that didn’t always go as planned, that got intermittently sidelined by other events in my life (buying an apartment, having a first baby), and included a whole feeling of responsibility and guilt unlike any other research project I’d ever worked on. More than anything, it was in a research area worlds away from poopular (yes, I mean poopular, it’s not just the baby talking) music, which is my number one research passion.

I’m not going to divulge any more details about that project here (details of it will soon be published elsewhere), but your takeaway from the above blurb should be that now that the project’s done, much more of my time can be devoted to my work in music and my work in mobile – both of which are central themes of this blog.

To wit, I’m TAing a 3rd year course in popular music studies this semester, and the gearing up is invigorating. We’re doing something of an experimental “taste laboratory” of sorts on Last.fm. I’ve invited the 76 students in the class to join so we can have some healthy backchannel in a music-rich environment. We’ll be sticking to the books and lectures in tutorial, but I figured having this optional addon for students who are so inclined can be instructive, and perhaps give some students some concrete experience with which to grapple whilst reading Hebdige, Attali, Adorno, McRobbie, and others (PS – I didn’t design the reading list, so if you have a problem with it, take it up with the Sessional).

The other thing I’m diving into now is a short ethnographic study (yes, the third in a series) of mobile phone use, using the Nokia N95. I’ve been playing with one of them for a couple of days now, and I am quite impressed with its ergonomic design. Something about this phone feels just alright, as Lou Reed would say. However, the phone keeps crashing when using the built-in photo gallery app. Looking for a workaround.

Oh, and of course, there’s the upcoming AOIR, which I’m helping out with (and presenting at).

I’ll be keeping you posted.

On Last.fm and royalty payments

Clear Channel didn’t get away with it, and now Last.fm is taking heat for not paying out royalties to independent artists. Last.fm, recently purchased by CBS, is now heating up indie music business blogs with this policy, even though it’s been in place since the company started.

Why so, asks the intrepid indie music biz blogger/Last.fm enthusiast and indie label/band person? Well, it seems there’s some misunderstanding of how royalty collection works. Last.fm is in fact playing by the rules, paying royalties to collection societies when tracks are streamed.

The big difference between Last.fm and conventional radio (and indie labels and bands should take note) is that with Last.fm, playlist/track streaming statistics are not hidden from public view, and do not rely on the inaccurate and gameable conventional sampling methods used by groups such as BMI, ASCAP or SOCAN in tracking radio airplay. And it’s not a closed pay-per-stat-view shop like Big Champagne is. As if that weren’t enough, the problem of payola is curbed via the voluntary ‘pull’ nature of “airplay” on Last.fm. The critiques of radio cannot be transplanted to a service such as Last.fm so swiftly. It is simply a different animal.

And anyway – wasn’t the hullabaloo about Clear Channel over the issue of payola in the first place? Lest we forget, “Clear Channel had responded to allegations of payola with a pay-for-play scheme”.

This is not to say that there’s nothing about which we can be critical with this Last.fm thing. I’ve blogged this previously, but I’ll say it again: it matters who owns what in Internet 2.0. And even though it feels like listeners are running the show on Last.fm, they might not be, and probably aren’t. Every boss must manage, and every company must profit, or die. It seems that the most important question is still – to invoke the terminology of radio, new and old – are we really “streaming” or are we being “programmed”?

Last.fm, CBS and the future of music

OK, I was going to take a lot of time and write a measured and considered manifesto, but in the spirit of the impulsivity that, according to my friend Jason, haunts and characterizes the blogosphere, I’ve decided to have a little blurt and then go enjoy the blistering West Coast sunshine. Blogs are for blurts; journals are for more careful screeds. Or maybe I’ll think differently tomorrow morning.

Last.fm was just bought by CBS Corp for a whopping $280 million US. According to the site’s blog, CBS “gets it”, which can imply a lot. Or not. Let’s try and untangle this problem, shall we?

Last.fm’s brand image is cloaked in thought-choking terminology including “the social music revolution”, “the wisdom of crowds”, “discovery”, “exploration”, and “sharing”, the usual stuff of second generation (Web 2 point oh) utopianism. So what would it say about CBS’ strategic vision, to say that the company “gets” this? To my mind, it indicates the following:

CBS wants to monetize what they predict will be fundamental changes in the way music is distributed and shared – changes that center around the evolution of taste publics, and Last.fm – with its mode of connecting users into taste publics, and its recognition of the way music earns value via its social contextualization – is the most valuable indicator about how these processes unfold.

I agree that Last.fm is probably the best indicator yet developed (partly due to the fact that it has scaled to a large user base, which means that it has now become useful to CBS by having generated a critical mass of aggregate data) of how taste communities are formed and evolve.

Like Nancy at Online Fandom, I am also uncertain whether the users of Last.fm will revolt against the company, fearing ads, the commoditization of their listening habits, or whatever other evils of the commercial Internet they may anticipate. I doubt there will be much of a backlash, so long as users derive value from it and it does not devolve into a top-down or non-neutral barometer of taste.

But what is the potential impact of this buyout? I think the answer, for now, resides in questions. I’ve thought of a few, anyway:

Now that a major media company has thrown its money behind it, how will CBS reproduce its business model through the Last.fm paradigm of music consumption/dissemination? Will CBS be responsive to changes in taste, or will it still try to foist dull things upon listeners? Will they respond to a supposedly “organic” evolution of taste publics, or will they steward or even manufacture these publics? Will they adequately represent minority/fringe interests, or will these suffer from the tyranny of the American Idol-obsessed “majority” of music fans?

How will Last.fm account for the distinctive modalities of cultural reproduction in different taste publics? Consider the use of fan fiction in sustaining fan communities around particular bands or artists. This is a mode of social reproduction that works by a logic that simply doesn’t exist in other musical circles (consider art music, or folk traditions, where dissemination depends on a whole other set of genre rules).

And finally, how will the CBS Last.fm make use of indie content on the site? Currently, labels and artists on the site enjoy a you-get-the-organic-viral-stuff-for-free-but-you-can-pay-for-real-promotion system (which is far better than the opaque system over at the News Corp. Myspace, which hand-picks featured content, following supposedly ‘old-fashioned’ music industry practices*).

How will independents fare under the new arrangements? I think that the minute Last.fm gives indies the short end of the stick and emphasizes featured content (they already do this a little bit, using banner ads and other paid promotion features), the accuracy of their engine for tracking user tastes depreciates. Additionally, under such conditions the site becomes more an instrument of managing or “programming” taste (to borrow the evocative, and appropriate terminology of conventional radio) than an instrument that reflects it – in Gramscian language, a hegemonic institution rather than a site for counterhegemonic resistance.

I acknowledge that these questions could also be relevant to a discussion of the pre-CBS Last.fm, which was built-for-buyout, despite the obvious benefits users derive from it. But the buyout forces these questions into the spotlight, and there’s no better time than now to raise them, while everyone’s paying attention.

Comments? Questions? BS barometrics? Holler back.

happy clicknoise new year

This blog launched exactly one year ago. And it’s still here. Hooray. Unlike zefrank, it’s not ending. Still, 12 months falls well short of the 33.8-month average lifespan of the Top 100 blogs. Of course, stats like this are unreliable indicators – remember that we’re only just starting to enter the long tail of blogging: its growth is only now beginning to slow, both in the number of new blogs and in the post rate. If that stat indicates anything, perhaps it shows that I’m 21.8 months behind the trends. I guess I’m not as cool as my Facebook profile says I am.

Anyway, a year’s a year. The Earth spun around the sun again, and we’re back in the same relative position to that big ball of gas as when I started (although apparently that spot is not so much the same sort of place it used to be, anyway).

Bad timing of me to take a 2 week hiatus, on a one year anniversary. But it’s just as well, as most of the web’s attention right now is on yesterday’s Virginia Tech massacre, and I have nothing to say about that that hasn’t been said already.

Once the semester’s over (in a coupla days), the backlog of stories and news items will be uncorked (of course, that will only happen after the coming weekend’s libations have been uncorked, drunk, passed out during, moaned about over breakfast, and then green-tea’d out of existence again). Lots to catch up on – we’re even getting our own version of the DMCA up here in Canada, and here I am writing term papers. Well, one more to go, anyway.

Coincidentally, April 17 also marks the anniversary of my cessation of smoking. 3 years without a twitch.

Anyway, chat with you soon.

Wikis, Authorship, and Botdom

Last week I experimented briefly with content syndication on Clicknoise, part of a wider campaign of mine to tinker with WordPress plugins. I accidentally succeeded in syndicating posts from Inner Ear Infection, where my friend Bruce writes. Surprised to have gotten the PHP correct, I quickly removed all of the new content from this blog, as I hadn’t asked permission to syndicate it (and didn’t really want to do so anyway).

As you can probably tell, I’m an “old soldier” when it comes to authoring and crediting sources (and here I’d like to credit my friend and collaborator Phil Western with that phrase, as he refers to himself as an “old soldier” with music). I’m deeply attached to the things I originate, and the genealogy of works of art in this age of electronic re/coproduction is as troubling to me as it is exciting. While I believe that co-produced knowledge (e.g., wikipedia, nowpublic, del.icio.us) is a truly emancipatory project, when it comes to artistic production I am less convinced. No historical example of collective authoring satisfies me; I obstinately refuse to give up my stake, my investment in things I’ve crafted. I get my hands dirty in some source material and turn it into something else. If someone grabs the thing I’ve just made and transforms it, but fails to mask it in their own personal aesthetic contrivance, I think my feeling of being ripped off is justifiable. But most bootleggers don’t do that, do they? I hear tons of mashups, most of them flat, derivative dross. The perceived value of mashups might depend merely on the recontextualization of well-known works (and, as Cory Doctorow has suggested, their appeal possibly also depends on the perception that the appropriation is subversive or illegal).

I was just reading this, and while I find much of it to be overblown (Wikipedia – not “the” wikipedia – is a community of editors, many of whom are well-known to each other, hardly an example of the “hive” of automatons Lanier is erecting and whacking at like a piñata), Lanier’s invocation of the value of people, of authors, of those who take responsibility for their utterances has a deep resonance for me when applied to art.

Of course, I’ve blogged about this before, and then again, after that.

I might launch a parallel syndication-only blog in the coming weeks as an experiment. I’d like to see what a bot can do using the same RSS feeds and random google searches that I use to churn out Clicknoise in my old skool manner. The only remaining question – what to call it?
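A first pass at such a bot is easy to sketch. The Python below is an invented illustration of the idea: it reads items from an RSS feed (inlined as a string here; a live bot would poll real feed URLs), keeps only items matching its topic keywords, and emits draft posts with an explicit credit line appended, since for an “old soldier” the attribution is the whole point.

```python
# Sketch of a syndication bot: filter RSS items by topic keywords and
# produce draft posts that always credit the source. The feed contents
# and keywords are invented for illustration.
import xml.etree.ElementTree as ET

FEED = """<rss><channel>
    <item><title>New mashup roundup</title>
          <link>http://example.org/1</link></item>
    <item><title>Gardening tips</title>
          <link>http://example.org/2</link></item>
    <item><title>Remix culture panel audio</title>
          <link>http://example.org/3</link></item>
</channel></rss>"""

KEYWORDS = ("mashup", "remix", "bootleg")

def draft_posts(feed_xml, keywords):
    """Return draft post texts for items whose titles match a keyword."""
    drafts = []
    for item in ET.fromstring(feed_xml).iter("item"):
        title = item.findtext("title")
        link = item.findtext("link")
        if any(word in title.lower() for word in keywords):
            drafts.append(f"{title}\n(via {link})")  # always credit the source
    return drafts

for draft in draft_posts(FEED, KEYWORDS):
    print(draft, end="\n\n")
```

Everything interesting about the experiment lives in what replaces that keyword filter: swap in random Google searches or a smarter relevance test and you have the parallel blog, name still pending.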

Cory Doctorow Leonardo Lecture at SFU

Cory Doctorow
The Faculty of Applied Sciences at SFU warmly welcomed Cory Doctorow to deliver our Leonardo Lecture for 2007 (entitled “The Totalitarian Urge: total information awareness and the cosmic billiards”).

It was a packed event, and a great speech (the same one he’s been delivering elsewhere of late). Ianiv at Blogaholics has posted audio from the event.

Audience

Some of us grad students got a chance to chat with Cory earlier in the day. The man’s an encyclopedia, able to switch from discussing the feudal commons to discussing authoring collectives to marketing to activism in rapid succession. Although there was insufficient time to thoroughly articulate my critique of the application of open knowledge paradigms to artistic spheres (I will be continuing the dialogue over email, Cory), it was a great opportunity to converse in person with someone as entrenched in free culture activism as he is. Kudos to Barry Shell, Richard Smith, and Brian Lewis (and whoever else I’ve maybe missed) for bringing this event here.

More photos from the day are on my Flickr page.