Back on 6th March we had a really excellent McShib day up in Edinburgh. The event itself was great, but the plane managed to break my laptop on the way home, which means I don’t have the presentations or the saved hashtags for the event – so this is just going to be about thoughts I had from the day.

For me the most impressive presentation was from the RAPID project, which is looking at a practical implementation of RAPTOR at the University of Newcastle. As well as a very clever project logo, Richard and Chris gave a really excellent overview of the project, introducing a whole range of new ways of using RAPTOR for monitoring including PC Cluster room usage and application usage within the university.

Phil Smart from Cardiff was on hand to talk about WUGEN – a WAYFless URL Generator created by Cardiff University. WUGEN is still a pilot concept and JISC is in the process of analysing the final report and talking to the UK federation about a permanent home. There is a test instance of WUGEN available here, but please be aware that this is a TEST instance only and not a permanent service! Interestingly, such a generator came up as one of the favourite ideas for the federation administrative interface.

For my part, I was at McShib not only for the interesting content but also to plead for help in designing an administrative interface for the UK federation. This is what happens when you ask a bunch of McShibbers to brainstorm for you:

Andy's Team at Work

The session was excellent and came up with some really good ideas that actually translated into something like a design proposal at the end of the day:

post-it notes are us

We’ve taken these ideas and created an ideascale from them – we would really like to see more of your ideas and thoughts to add to this…and hopefully more news on this soon. Generally though I’d be interested in feedback on whether you think an interface for the federation is a good idea – what do you think?

Federating the Researchers

If you don’t have time to read all of this blog, I’ll cut straight to the chase! If you work in a library, we would love you to fill out this survey. If you are a researcher, we would love you to fill out this survey. Both will help direct an EU study that is trying to improve access and identity management within research and for researchers.

If you have some time, read some more below!

In cooperation with TERENA, the University of Amsterdam and the University of Debrecen, LIBER is conducting a study which will explore the conditions for the implementation of a single European access and authentication infrastructure (AAI) for research information, or put more simply a ‘researcher passport’ that will allow European researchers to access all the research resources they need with one credential.

I’m really glad to be taking a small role within this study, as an expert advisor from the web SSO environment. Access and identity management is particularly complex for researchers in the current environment as they struggle with affiliation to host institutions, research groups, virtual groups, social groups – and the ongoing battle of just accessing the output of research in the traditional journal form. Throw this together with network access and the complexities of access to high-end computational resources – and it’s a bit of a mess.

This study comes directly on foot of the Riding the Wave report, in particular the recommendation to create a directive to set up a unified authentication and authorization system in order that researchers from any discipline can find, access and process the data they need. Within the context of our study, these data encompass not just primary scientific data, but all data that a researcher needs to conduct research.

The surveys ask you to think about what a ‘research passport’ (as proposed by the report) might look like.

The first survey is for libraries: We would particularly like institutional repository managers and librarians providing research support (e.g. subject librarians) to fill this survey in.

The second survey is for researchers. We would appreciate it if you could send this on to your researchers and/or put it on your library website.

The results of this survey will help form recommendations for a directive for the implementation of a single access and authentication system for research information.

This work is nicely complemented by recent activity led by CERN to examine the role of federations in supporting researchers. This group has produced a really useful paper that I’d urge you to read if you are interested in the area.

The RAPTOR Fences are Out…

Although it’s been a while since the software hit v1, today sees the formal press release for the RAPTOR project – so I thought it would be worth a quick update.

RAPTOR is a usage statistics tool that you install locally. It basically reads authentication logs and presents them back to you in a friendly way, enabling you to track usage and create management reports. It has broad applicability – at the moment it tracks Shibboleth and EZProxy logs, but the tool can also be used for things like eduroam, OpenAthens and the emerging Moonshot project.
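
To give a flavour of the kind of processing involved, here is a minimal sketch that tallies authentications per day from a pipe-delimited log. The field layout is a simplified assumption for illustration, not the exact Shibboleth or RAPTOR format:

```python
from collections import Counter

# Assumed, simplified log format: timestamp|user|service
# (real Shibboleth audit logs carry many more fields).
SAMPLE_LOG = """\
20120301T091500Z|alice|https://journals.example.org/shibboleth
20120301T101200Z|bob|https://journals.example.org/shibboleth
20120302T140000Z|alice|https://ezproxy.example.ac.uk
"""

def authentications_per_day(log_text):
    """Tally authentication events per calendar day."""
    per_day = Counter()
    for line in log_text.strip().splitlines():
        timestamp, user, service = line.split("|")
        per_day[timestamp[:8]] += 1  # YYYYMMDD prefix of the timestamp
    return per_day

counts = authentications_per_day(SAMPLE_LOG)
# counts["20120301"] == 2, counts["20120302"] == 1
```

The same tallying idea extends naturally to counting per service or per user, which is essentially what a management report view boils down to.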

One of the most frequent questions we are asked is: what is the difference between this and JUSP? My answer to this is typically that RAPTOR is broad and shallow in its analysis, whereas JUSP is narrow but deep. RAPTOR only gives you information about the number of authentications – but it can track ALL of your resources. JUSP gives much more detailed information about a range of actions carried out by specific users, but is restricted to a range of e-journal providers. Both tools are equally valuable to librarians and managers, and we hope in the future that more work will be done to help pool the information from both systems. There are also obvious synergies here for both tools in terms of the JISC KB+ project as well.

Another useful aspect of RAPTOR is that you can use it to compare back to other information held in your directory – so you can track usage by departments, or year, or even attainment grade if this information is stored.
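
As a toy illustration of that kind of cross-referencing (the users, departments and data structures here are invented for illustration, not RAPTOR’s actual internals):

```python
from collections import Counter

# Invented sample data: one entry per authentication event, plus a
# directory lookup mapping each user to a department attribute.
auth_events = ["alice", "bob", "alice", "carol"]
directory = {"alice": "Physics", "bob": "History", "carol": "Physics"}

# Resolve each event's user against the directory, then count.
usage_by_department = Counter(directory[user] for user in auth_events)
# Counter({'Physics': 3, 'History': 1})
```

Swap the department attribute for year of study, or any other attribute the directory holds, and the same join gives you usage broken down along that axis.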

I’d urge you to download the tool and have a play. The creators have assured me that the installation process is so simple that even I couldn’t break it….a claim I may well try out at McShib! Whilst mentioning McShib, it’s worth pointing out that we will be covering RAPTOR at the event and the RAPTOR team is also planning some workshops later in the year.

Another point worth making is that we are hoping that institutions will take advantage of the ‘aggregate up’ function in RAPTOR. This allows you to send appropriately anonymised data up to an aggregation point, where it can be compared with other RAPTOR instances. We are looking at using this feature through an aggregation tool at the JISC Monitoring Unit, and it has the potential to give us, for the first time, clues about the national picture of resource usage, which will help inform decision making on a much broader level.
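
One common way to anonymise usage data appropriately before it leaves an institution is salted one-way hashing of user identifiers. The sketch below is illustrative only, not RAPTOR’s actual anonymisation scheme:

```python
import hashlib

def anonymise(user_id, salt):
    """One-way pseudonymisation of a user identifier.

    The same user always maps to the same token, so usage trends
    survive aggregation, but the raw identifier never leaves site.
    """
    digest = hashlib.sha256((salt + ":" + user_id).encode("utf-8"))
    return digest.hexdigest()[:16]

# Stable within one institution's salt...
assert anonymise("alice", "site-secret") == anonymise("alice", "site-secret")
# ...but not linkable across institutions that use different salts.
assert anonymise("alice", "site-secret") != anonymise("alice", "other-secret")
```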

Finally, the current JISC Digital Infrastructure call is looking for people to pilot and evaluate RAPTOR – so you can even get paid to see if this tool is valuable for your institution. That seems a pretty good deal to me.

The Business of Being Open

‘Business’ is often seen as a dirty word when talking about open solutions – whether that be open access, open data or for me open source software. It’s amazing that people always seem to think that the two things don’t go together…but of course for anything to survive in any practical sense, someone has to be paying for it somewhere.

This has pre-occupied my mind a lot over the last few months as we look to move the Shibboleth project into a new business model, supported by the Shibboleth Consortium. When we first started talking about a new business model, a lot of people immediately thought we were going to start charging for the software – this has never been on the table for us at all. It’s more than Open by Default, it’s just unquestionably Open.

There are of course a myriad of funding approaches for open source – from projects that run purely on time donated by people who love what they are doing, through membership schemes and supported models, where the product is free and users pay for support or consultancy. There is one thing that is similar about all of these though – you pay for the labour and not for the product. I think it is entirely fair that hard working programmers do actually get paid at some point, particularly when the product – like Shibboleth – has an international market that runs to millions of end users. It’s entirely possible to do this without flogging a product, by funding the service of software creation and not the content itself.

What comes below will be obvious for those of you who work regularly on open access, but for me comparing it to the standard way of working for open source providers was a helpful and cathartic exercise. Apologies if it sounds like teaching you to suck eggs 🙂

Following the recent Elsevier furore from a distance, it seems to me that this is where the publishing industry has everything back to front. There are four key parts to the work undertaken in the academic publishing cycle:

  1. The work carried out by the researcher / author that leads to a proposed article. Typically paid for by a research grant or institutional wages.
  2. The peer review work carried out by researchers worldwide. This is more tenuous to define, but let’s assume this is paid for by institutional wages or just plain old good will.
  3. The administration of the peer review process. This is paid for by the publisher.
  4. The hosting arrangements for the journals. Again, this is paid for by the publisher.

Publishers are quick to cry out ‘we add value!’ and of course they should be entitled to be paid for the chunk of value that they do add – i.e. steps 3 and 4 above. This can be done by paying them for the labour and service, with a sensible overhead. I believe a payment for labour would also help improve quality. Whilst I think most researchers appreciate the work undertaken by publisher staff to coordinate the peer review process (often a thankless task), the quality of the hosting arrangements is often poor – publisher websites tend to care very little about being user friendly and optimising results for searches. If they were paid for labour and service rather than content, would this improve?

So yes I think it is entirely fair that publishers should be paid for where they add value, but this value has to be of the same high standard expected of the authors. It can also be achieved without having to sell the content, but by selling a service back to the community. I also think it is reasonable to make a commercial profit on that service, even whilst noting that no-one makes a commercial profit on points 1 and 2 at the moment.

My point being, it’s perfectly possible to run a profitable business model without forcing researchers to give up their own content, sign their rights away and then forcing institutions to buy the work back from a publisher. It would be akin to IT staff within an institution writing code for Microsoft, giving it away for free, and then spending an institutional fortune on buying Microsoft licenses. That would be crazy, right?

As for us, what are we trying to do? Well Shibboleth has always attracted money in some sense or another – predominantly through grants from Internet2, JISC and SWITCH. We recognised that we wanted to spread the burden of the cost of developing and maintaining Shibboleth, so we are establishing the Shibboleth Consortium. The Consortium will welcome both ‘sustaining members’ (i.e. organisations making significant contributions that help us keep afloat) and smaller donations (as a sort of ‘I appreciate using your software’). The obvious goal is to achieve enough funding to address the current Shibboleth roadmap; the ideal goal is to achieve more funding so we can add more hours to the roadmap. I’m obviously nervous about making this work but feel calm that by funding labour and not product, we are offering real value to the community.

WAYRN: Where are You Right Now?

Anyone who has worked with federations will be familiar with the term WAYF – Where are You From? This is the question you are asked so a service provider can identify which institution you are affiliated with. As a term it’s not so accurate – am I really ‘from’ King’s College? – but as a concept it has helped explain the process in relatively simple terms to non-technical people. The keen eyed among you will have seen a general tendency to refer to the WAYF as the ‘Discovery Service’ these days, a refinement of terms that always happens as services mature.

However, what happens if I *really* want to know where you are, not where you are from, but where you actually are at this minute? We’ve tended to rely on IP address checking to make this possible, but it has many problems. It means that Service Providers have to maintain and update a list of IP addresses for organisations – JSTOR recently told me that they receive up to 3 change requests for IP ranges per day for their services (globally, not just from the UK). It’s something that you have to remember to do if your IP range changes, and that depends on the right people being told that changes are occurring. We know it is prone to inaccuracies and human error – a certain provider was for a period of time convinced that the JISC IP range belonged to Bournemouth University. Finally, an IP address doesn’t actually give you any interaction with an individual, as it applies access indiscriminately to the machine and not to the user, so personalisation, customisation and other identity management features are not possible.
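
The mechanics (and the limitations) of this approach can be sketched in a few lines – the institution names and ranges below are invented for illustration:

```python
import ipaddress

# Hypothetical table of licensed IP ranges that a Service Provider
# must keep up to date by hand - names and ranges are invented.
LICENSED_RANGES = {
    "Example University": ipaddress.ip_network("192.0.2.0/24"),
    "Another College": ipaddress.ip_network("198.51.100.0/25"),
}

def institution_for(addr):
    """Map an address to an institution, or None if no range matches.

    Note what this can never tell you: anything about the individual,
    since it grants access to the machine rather than the user.
    """
    ip = ipaddress.ip_address(addr)
    for name, network in LICENSED_RANGES.items():
        if ip in network:
            return name
    return None

institution_for("192.0.2.17")   # 'Example University'
institution_for("203.0.113.9")  # None - a stale or mistaken table entry?
```

Every renumbering at every institution means editing a table like this at every provider, which is exactly the maintenance burden described above.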

These problems are magnified in the schools sector, where any Service Provider may be dealing with literally thousands of school customers. There are also even more reasons within the schools sector why it’s important to know that a logged-in student is actually within a specific IP range, particularly when serving content to children.

A short while ago, the schools representatives on the UK federation Technical Advisory Group approached the federation staff and asked if it would be possible to include a location assertion in the assertions made by an IdP to support the use cases where geographical location was important. The technical team and EDINA got to work and I’m pleased to say that the UK federation will be commissioning development of a location assertion to meet these use cases. As well as supporting many use-cases within the schools sector we can see places where this could be more broadly used, such as to support walk-in access.

For those interested in learning more, Ian Young recently presented his findings to the TAG and the slides from this talk are below or from slideshare directly. Development work will start shortly, so keep an eye out for further information and updates. If you would like further information on the work, drop a line to the UK federation helpdesk.


…and here are some useful slides from Owen Stephens on this topic from way back at #FAM09.

On Frictionless Sharing

This is one of those posts that could be a response to someone else’s post but got so long, it’s here. It’s my thoughts on a long chain of people thinking, which are most effectively summarised by Amber.

I’m not going to rehash the conversations – the people that have gone before me have done it so much better – but I wanted to have a look at this purely from an identity management perspective. These are the thoughts that I thought:

  • Much of this is, of course, all about identity and how your identity is big business to the services around you. David Kernohan mentioned the ‘user data bubble’ and this is exactly the sort of scenario that IDM initiatives such as UMA (User-Managed Access) are trying to tackle with their approach to the personal data ecosystem (still makes me shudder as a phrase). I’ve always been impressed with UMA as a technology but sceptical about user take-up and the amount of ‘friction’ involved in having to manage your own personal data to get effective sharing and information filtering the way you want it.
  • If we want to see frictionless sharing, it is likely that we are probably compromising on personal data security and what we call PII (personally identifiable information) release somewhere. This is a fact that is difficult to escape.
  • I think company behaviour and patterns are interesting in this case. Even though Google and Facebook (and Hotmail and everyone else) are doing the same thing, the approach taken to ‘personalising’ or ‘filtering’ or ‘advertising’ information to us has been different with each, and that changes perception. Facebook started on paper as a walled garden, an authenticated environment, and we kind of expect the tailored environment of advertising within that space – especially when it’s free. Google on the other hand is perceived by many as an open environment, even though people are often not aware that they are permanently signed in to Google….so when they start pushing Google+ links or showing too much awareness of our behaviour, it causes concern.
  • I wonder what effect, if any, the changes to cookie regulations will have on the way information is filtered through to us without our awareness? It is exactly the sort of monitoring behaviour the law is designed to prevent, but it is exactly the sort of behaviour the law is badly placed to stop.
  • A lot of the filtering does actually hit the mark – for example Amber really did want to know about Scottish Castles – and even though it can be annoying it’s not something we want gone, perhaps just more under our control. The space where accurate filtering of web content is not working out is the more traditional academic space – the Google Scholar approach is just not taking off. This is something I talked about at the FAM11 event.

I often talk about the phrase ‘if you’re not the customer, you’re the service’ and it’s boring to keep trotting out a hackneyed phrase, but it’s that attitude that things like UMA are trying to address. UMA says I may be using your service for free, but you are not buying me, you are not buying my data, and I know what it is worth to you.

So where does this leave us? I’m not sure, but as Amber’s post suggests maybe there is a group of people, the twittering classes, who might be willing and able to embrace the personal data ecosystem and use it to make their filtered, frictionless world a place where they are more comfortable? We’ll just have to wait and see.

You Can’t Sue Unicorns

A lot of people have asked me why you can’t sue Unicorns. Here is the back story. Names have been changed to protect the guilty.


One cold wintry day, two federati are talking about people making unreasonable demands as to what should be included in a policy statement:

Federati1: I want federation operators to supply me with a white horse, pony or donkey so that I can dress it up for my “federations powered by Unicorns” campaign (but I don’t think it will happen).

But it is as useful as all the other suggestions.

Federati2: Now unicorns i’m all for investing in.

Federati1: The Purple Federation would probably find the budget for a pony.

Federati2: we can glue on a horn.

Federati1: That was my plan. It’s only marketing after all. Although they’ve probably got the budget for a real Unicorn.

Federati2: I’m in. As long as I can dress up in a conical hat.

Federati1: Clearly only princes and princesses can ride unicorns. It’s not like you’d let the policy say more than that!

Federati2: no [rude word] way.

Policywonk1: [interrupts] It only says that Princes and Princesses SHOULD ride unicorns. I should be allowed to as well.

Policywonk2: You have to sign the insurance waiver form in English. It says that it SHOULD be in English – but I think it should be changed to MUST be in English.

Federati1: Unicorns can’t read – they just want to frolic through the air. It doesn’t matter what language the insurance waiver is in. YOU CAN’T SUE UNICORNS.

Policywonk3: Because Welsh law still recognises the original KJV of the Bible, unicorns can be sued in Wales.

Federati2: So no one objects if I ride my unicorn-pony dressed as a princess as long as I don’t do it in Wales?

LESSON1: Don’t ride Unicorns in Wales.

LESSON2: Never engage in a battle of wits with someone who writes policy….or Sicilians.

How can we create an identity economy for research and education?

This is the entire transcript of my FAM11 presentation that some of you have been mad enough to ask for. I hope you enjoy or ignore as appropriate! The slides are here if you would like to follow along.

How can we create an identity economy for research and education?

When I was asked to take on the role of UK Access Management Focus, one of the things I was asked to examine was the general state of access and identity management within the UK Research and Education environment. After a year in this job, I find myself asking a simple question:

Do we have an environment in which identity plays a role?

I’m not sure we do as yet.

Many of our conversations around identity at the moment involve the institutional role in provisioning identity vs the uptake of social networking identifiers, or to put it more bluntly…should we give students email accounts if they are already using their own?

However, before we start worrying too much about where a user’s identity is coming from I think we need to start creating an identity economy in research and education. To do this, I think we need to look at 3 different steps:

  1. Legitimise the web as a place for scholarly activity.
  2. Fix the right problems.
  3. Shift from constructing spaces to supporting actions.

I’m going to take a short time today to argue around each of these points, predominantly from the student perspective, and argue that we need to transition ourselves more fully into the open web before we can start to build a proper identity economy for education.



The problem with the web is that for too long people have not considered it a legitimate space for scholarly research. It’s not safe, it’s not defined, it’s just not scholarly – I mean ANYONE can write a blog. So we tend to build spaces where we feel more comfortable, spaces that screen out a large amount of the information overload we are faced with. I’d argue that there are 2 ways of achieving this – creating silos or filtering information – and that we tend to pursue the first as a typical approach to education.

The alt tag for this reads: the most exciting new frontier is charting what is already here.

Dave White has been interested in this area for some time, and has been pulling themes of legitimacy in to his work on Visitors and Residents. For those not familiar, Dave uses the terms visitor and resident to refer to the different ways people interact with the web – with visitors dipping in and out of services, and residents more akin to what we consider a digital native – those who immerse themselves online.

The Visitors and Residents project is currently undertaking work to look at a student’s motivation for being involved with resources online. It makes some interesting observations that cross over in to the identity space, as shown in this diagram:

Image Scott Room – David White – CC attribution license.

GWR = Google > Wikipedia > References. It’s an approach often adopted by students but one that they feel is illegitimate as a study approach. This can lead to a tension between using the source but not referencing the source due to its illegitimacy: hence creating the learning black market. In other words, it is not in the legitimate learning SPACE.


People often gravitate towards a known ‘space’ on the web; it’s a great comfort blanket and one we often use in R&E: if it’s in this portal, behind this wall, on this list, it is ok. Anything else by default is not – it’s part of the learning black market. The urge to define your own space pervades every discipline, and everyone who works in technology will recognise this approach to defining standards:

By creating your own space on the web, you are asserting control and a structure. What do I mean by space? I mean anywhere where I have to learn to visit a certain point to start my scholarly process rather than just opening a web browser. This could be a variety of things – a library portal, channeling through a proxy server, reading lists in a VLE, even a publisher website. I’m not saying any of these approaches are necessarily wrong, but that they should not be the only solutions we explore to enrich the user experience.

We learnt a long time ago that structure is perhaps not the most important part of how we approach our interaction with the web. We began by using HTML as the language of the web, a language that focuses almost entirely on structure – bold, header, italic, paragraph. The limitations of such an approach were soon realized, and XML was developed to help us describe features and content – a more semantically rich approach. This is exactly what we use in the UK federation….it would be pointless for us to send information to another party saying ‘this element is bold’, they need to know which part of the information we are sending is the entityID for any given member of the UK federation:

JISC Monitoring Unit – restricted access to JISC Monitoring Unit data
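
For the curious, pulling the semantically meaningful entityID values out of a (much simplified) metadata fragment looks something like this – the entity names here are invented:

```python
import xml.etree.ElementTree as ET

# A tiny fragment in the style of SAML 2.0 federation metadata.
# What matters is the semantic entityID attribute, not any
# presentational markup.
METADATA = """\
<EntitiesDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata">
  <EntityDescriptor entityID="https://idp.example.ac.uk/shibboleth"/>
  <EntityDescriptor entityID="https://sp.example.org/shibboleth"/>
</EntitiesDescriptor>
"""

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
root = ET.fromstring(METADATA)
entity_ids = [e.get("entityID")
              for e in root.findall("md:EntityDescriptor", NS)]
# ['https://idp.example.ac.uk/shibboleth', 'https://sp.example.org/shibboleth']
```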

I think when we are approaching discovery of scholarly resources, we need to see a similar shift as we have seen from HTML to XML. Because we have complete control over the spaces, we can focus on a structured approach to the way we think about those spaces. Anyone who has worked in the JISC space over the last 10 years or so will be familiar with the concept of ‘discovery verbs’ – i.e. search, find, use. I’d like to see these enhanced by some identity verbs, and I’d argue that the only way to use identity verbs effectively is in a completely open web context, and not in a siloed space. More on that later.


One thing we do really really well within education is find workarounds to problems.

My absolute favourite example of this happened when I first started working for JISC. Our host institution was very suspicious of us as individuals and would not allow us to have admin rights to our laptops. When we campaigned to be allowed rights for a 24 hour period so we could add the printer drivers for our home printers, we were refused. Instead our host BOUGHT US ALL NEW PRINTERS FOR HOME AND INSTALLED THE DRIVERS FOR THESE PRINTERS THEMSELVES. We were also only allowed to put work issued printer cartridges in them…I can’t imagine how much this particular workaround actually ended up costing.

A more recent and relevant example was a request from an institution to help with a provisioning problem. The institution in question was taking a long time to properly register students and provision them with accounts, so there was a gap when students did not have the credentials to access online resources. I was asked if there was a way to create ‘guest accounts’ with Shibboleth to get around this problem.

My response was as follows:

  • It’s up to an institution whether or not they want to create guest accounts within their system, but generally it is bad practice;
  • It should take the same time to provision a guest account as a regular account; if not, the IDM system is broken;
  • If a student is not formally registered and provisioned in your system, they aren’t ‘eligible users’ and shouldn’t be using resources;
  • The provisioning process needs fixing, you don’t need to find a library workaround.

My response was not well received – this was not the answer they wanted to hear.

I think it is fair to say that because the IT requirements within different areas of an institution are often poorly articulated, there is a ‘no’ culture towards departmental requests for new or changed processes. I also think it is fair to say that there is a tradition of friction between IT departments and libraries in many institutions. This often leads to departments seeking a workaround just to make things work for users. I have a lot of sympathy with that.

However, it is clear that if it is taking several weeks to provision a user into your systems, your process is broken. A recent CSO Online article lists the average time it takes to provision or de-provision a user as one of the key metrics for a successful IDM system. The full list is interesting and includes things like the number of accounts per user and the time it takes to approve a change. Have a look at the list and if these things aren’t working for you, you need to fix your IDM system, not try to work around it.


We all know the words associated with spaces. Of course it is always going to be important that we feel we understand, and in some sense have influence over, the spaces in which we are learning and teaching. There is however another way of looking at it.

This is what I meant by the identity verbs I mentioned earlier.

Follow, Share, Tweet, Check-In…and most importantly – LIKE.

With the LIKE button, Facebook realized that it had more power and reach outside of its own walled garden, but needed to take its meme and apply it as a metric or filter on the open web. They may not do it in a way that makes them very popular, but it has undoubtedly been a successful approach.

The ‘like’ approach is about Facebook trying to filter its brand through open web searching to support user interaction with resources. At a very different scale, the recent changes to discovery within the UK Access Management Federation try to achieve a similar vision, although not with the brand of the federation. MDUI allows you to have both the institutional brand and the service provider brand at the right points in the login flow so that a user does not get lost when they get sent to an external service provider. I’d encourage Identity Providers in the room to look at using the new Discovery Service code, which means you can automagically include the logo of the SP that the user is logging in to on your login page, as per the following examples. I’d also obviously really encourage Service Providers to give us the MDUI information – Service Providers have the most to gain from making use of this feature.

One of the most important lessons we can learn is how we position ourselves in relation to the Internet. Recently, I asked a group of people to draw me a picture of ‘how they see the Internet’. I didn’t tell them why I was asking or what I wanted to do with the information, I just wanted their interpretation of HOW THEY SEE the Internet. These were all information professionals that I would consider to be Residents in Dave White’s definition.

I think it is interesting to compare the first two images with the second two, purely from the perspective of inclusion of self in the picture. It is only in the second of these four images that we see someone who places themselves at the heart of the Internet and how it is working for him.

In a recent Guardian article, Dr Abhay Adhikari argues strongly for an identity driven approach to digital literacy, and says that Universities must rethink their approach to student digital literacy.

“We need to stop digital literacy training that uses the internet and social media to achieve pre-defined outcomes.”

Instead we should teach students to use the internet as a communication tool, noting that:

“Reflection + Internet = Digital Identity”

This is about getting beyond the mechanics of ‘finding’ and ‘using’ the tools, and about using the environment to have conversations and to both research and evaluate resources and discussions online. This is the journey towards becoming a resident and towards becoming a mature researcher capable of managing the open web. If we can get to that point, we start to have an identity economy for R&E, and can then evaluate our provisioning role within that environment.


Of course, if we are talking about an economy, we have to add value in that space. What value does an institutional persona hold to me?

Getting access to services I would otherwise be locked out of is quite a negative use of a powerful tool. Citing affiliation is a much more powerful approach. Give me more because I have affiliation.
The UK federation is already beginning to show the power of being able to express ‘studentyness’ to gain access to services. We have student union services, student discount services and student housing services all making use of the assertion of ‘student’ within the UK federation – a more positive use of federated identity than perhaps we are used to seeing.

The need to be able to effectively identify yourself as a researcher is a large-scale problem being investigated by organisations such as NISO, ORCID and VIVO. VIVO in particular shows the importance of being able to openly share institutionally created profiles of authors. These initiatives in turn are starting to feed the use of search engines such as Microsoft Academic Research and Google Scholar.

Neither of the search approaches from Microsoft or Google has been particularly successful or adopted as a mainstream approach by institutions, hence the adoption of closed discovery services to tackle the academic discovery problem. However, if we put some more time and effort into the identity side of research and education, could we perhaps help solve this problem?


In this talk I’ve argued for a proper scholarly layer to the internet, filtering information appropriately, supporting affiliation within the search engine and controlled and directed by the identity transactions of our users. We aren’t there yet – Google Scholar has failed to fill this niche effectively – but there are behaviours we can adopt, stop and change to get us closer to this vision.


All links available from this Google Bookmark list:

Adhikari, Abhay. “Universities must rethink their approach to student digital literacy”. <>. Accessed 10th October 2011.

Villavicencio, Frank. “Identity Metrics that Matter”. <>.

White, David. “The Learning Black Market”. <>. Accessed 10th October 2011.

Student developers

It’s been a pretty stressful time for me recently as I deal with an unexpected house move, so it takes something really interesting to grab my attention away from boxes and clutter at the weekends. This weekend, DevXS managed to do exactly that.

DevXS was a simple but lovely idea – what would happen if you brought together a bunch of students from across the UK and asked them to code for 24 hours? The answer is, quite a bit! For more on what the students were up to, take a look at the DevXS wiki.

My first reaction as I watched the tweets was – if only I had known! I would have chucked in some sponsorship for access and identity management type developments. My second, hot-on-the-heels reaction was not to be silly – why would anyone be interested in that? It’s not cool or particularly interesting…it’s backend stuff. Access Management is typically something that is tacked on at the end of developing a service, normally just using the local authentication method in a poorly supported way. Cue depressed look about how we can change the general attitude towards access and identity management.

But, well, why *wouldn’t* a bunch of students developing services for students want to use federated access? Tools like SimpleSAMLphp in particular are designed to offer such functionality to lightweight services in a very accessible way. Furthermore, once implemented it would mean that ANY STUDENT in the UK (or indeed in many different countries worldwide) would be able to use the service without needing to be provisioned, using their local institutional username and password. That’s handing you the entire student population on a plate. Finally, you wouldn’t even need to administer these accounts or have overhead for forgotten passwords / usernames as this is all done elsewhere. Again, this is very in-keeping with lightweight service developments.
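To give a flavour of just how lightweight this is, here is a minimal sketch of protecting a page with SimpleSAMLphp. It assumes a local SimpleSAMLphp installation with an authentication source configured – the autoload path and the ‘default-sp’ source name are placeholders that depend on your setup:

```php
<?php
// Sketch only: the install path and 'default-sp' auth source name are
// assumptions about your local SimpleSAMLphp configuration.
require_once('/var/simplesamlphp/lib/_autoload.php');

$as = new SimpleSAML_Auth_Simple('default-sp');

// If the user is not already logged in, this sends them off to their
// home institution (via the federation's discovery service) and back.
$as->requireAuth();

// Attributes are asserted by the user's own institution -- no local
// account provisioning, no password resets to handle.
$attributes = $as->getAttributes();
```

That’s more or less it: a handful of lines and every federated student can log in with the credentials they already have.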

I really wish we could stop making access and identity the last thing we ever think about when developing a service – particularly as it is often the thing that most affects the experience of a service when done badly. So maybe next year we could challenge some of the developers to use a tool like SimpleSAMLphp as they develop. Remember, fairly swift access to the ENTIRE student population for your service…how can that be a bad thing?