Roaming about at TIP2013

One of the first sessions I attended at TIP2013 that I had a chance to participate in was the eduroam BOF. Although my involvement in eduroam tends to be restricted to enthusiastic use and slight dabbling in issues of usability and policy, I was happy to attend as the European voice in a room of potential deployers from the Internet2 and APAN communities. Whatever your thoughts on eduroam, being able to walk into a room 7,000 miles from home and have your phone and laptop immediately connect to a network before you are even aware that it has happened is impressive.

Listening to the delegates talk, it was pretty much the old story of everything is the same, everything is different. I’ve tried to capture some of the main discussion points below.


When rolling services out globally, there are always going to be local variations, issues and quirks associated with culture, history and environment. Within identity federations, which have grown up in a more distributed fashion than eduroam, I have often commented that the differences in our policy documents and approaches reflect more about our different cultures than our technical competences. Over the years, eduroam has seen a variety of different issues relating to non-conformance, or areas of development that have evolved differently in different places, which tends to lead to one thing – confused and disenfranchised users. For the ‘magic’ of eduroam to happen, consistency is vitally important.

This means that eduroam has taken measures to add central control to the infrastructure, meaning that anyone signing up has to adhere to a certain set of policy and technical requirements. Such a centralized control mechanism, emanating from Europe, can seem uncomfortable to large US institutions looking to sign up to eduroam-US and presents challenges for the Global eduroam Governance Committee. A significant problem for the US at the moment is the non-hierarchical nature of .edu domain names…unlike the practice for research and education seen elsewhere in the world. These problems are not insurmountable, but do impact on the support requirements for the US.


One of the things that struck me immediately was the focus on being able to trace and track users, and concerns over indemnification clauses that I don’t see so much in Europe. This is because, as one delegate put it, “we all have our laws that are difficult but we just have to comply with”. In the US, this is the DMCA which places a significant burden on US institutions in terms of their responsibility should there be a copyright breach. It was interesting to note that the room then quickly identified privacy laws as the equivalent pesky problem in Europe, presenting challenges at the other end of the trace the user / protect the user conundrum.


The DMCA means that US institutions have more of a problem with multiple users arriving as anonymous@domain, which is common in the eduroam set-up. This is why there is significant interest in the US in CUI, or Chargeable User Identity, a mechanism that allows a user to be identified consistently regardless of the device they use – handy for event organisers to view people like me who are logged on at events on a laptop, tablet and phone at different times. Scott Armitage has recently written about this over on the JANET blog. The participants had an interesting conversation about whether CUI should be required for US participants in eduroam – a familiar conversation across all our work as we look to balance compliance and best practice with coverage and usage.
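The gist of CUI can be sketched as a keyed hash computed by the home organisation: one stable value per account, whatever device presents it. This is a loose illustration with an invented function and secret, not the RADIUS wire format – the real CUI attribute is defined in RFC 4372 and minted by the home RADIUS server:

```python
import hashlib
import hmac

def chargeable_user_identity(username: str, realm: str, secret: bytes) -> str:
    """Derive a stable, opaque identifier for one account at its home realm.

    The value depends only on the account, not on the device or its MAC
    address, so the same person shows up consistently across laptop,
    tablet and phone. Illustrative only: real CUI is minted by the home
    RADIUS server with its own keying.
    """
    message = f"{username}@{realm}".encode("utf-8")
    return hmac.new(secret, message, hashlib.sha256).hexdigest()[:32]

secret = b"home-idp-secret"  # hypothetical per-site key
laptop = chargeable_user_identity("alice", "example.ac.uk", secret)
phone = chargeable_user_identity("alice", "example.ac.uk", secret)
assert laptop == phone  # one person, many devices, one identifier
```

Because the value is opaque, a visited site can count distinct users and trace an individual back to their home organisation without ever learning who they are.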


As eduroam becomes more and more successful, demand has grown for it to be implemented in locations outside of research and education. It is quite common for event organisers to implement eduroam at education events run in hotels at the moment, but this is just the tip of the iceberg.

A common request is for eduroam to be supported for travelling users. eduroam on buses, trains and at airports would be of significant benefit to many people. Airport access and beyond is now a reality in Sweden thanks to an experimental arrangement organized by Sunet in partnership with The Cloud. Delegates at the conference also discussed people signing up to open their home broadband up as eduroam hotspots.

Whilst increasing the number of locations in which the research and education community can access eduroam may seem a no-brainer, things get slightly more complex when we start talking about allowing non-educational users to act as Identity Providers. eduroam has typically been offered under a scheme of reciprocity – you act as a service provider for my users, and I’ll do the same for yours. However, if McDonalds wished to provide eduroam at each of its outlets, would eduroam sites be happy to allow McDonalds employees access? The extent of the growth of roaming patterns for eduroam will be an interesting journey.


Like many initiatives, eduroam-US needs to be able to support itself moving forward and has to consider a business model; as usual, attaching appropriate costs to eduroam whilst trying to expand coverage is a difficult balancing trick. An interesting suggestion from one of the delegates was to simply cost the service at State level, and then work to find an appropriate party within a State who is willing to either a) act as a broker to recoup costs locally or b) accept the costs internally due to the benefits it might bring to the State. This is a model we have often seen in play with the NRENs across Europe supporting TERENA services in different ways. This could see, for example, a State University willingly paying the costs for State-wide eduroam and also supporting local colleges and schools to adopt eduroam, due to the added benefit of increased eduroam coverage and a decreased need to support guests locally at the State University. I think this is a really interesting model and one I hope gets explored.


The discussion around business models got me thinking about the roaming parameters of different individuals and the business case for support that could be built up around them, as coverage affects different people in different ways.

If I had ever bothered to map the different locations in which I have accessed eduroam, the result would be far-flung and global. I’ve accessed eduroam everywhere from an island in the middle of Sydney Harbour to a bus in Malaga, through many conference facilities, offices and institutions across Europe. Interestingly enough, my requirements are often event-driven and have required local NRENs to work with hotels and venues to facilitate eduroam access at a cost. Is this something I’d be willing to pay an acknowledged premium fee for whilst attending events?

Back home, I use eduroam often in the JISC offices, as I work from my personal laptop more often than not. I don’t travel often in the UK anymore, so my roaming parameters are very different from those of colleagues in JISC who have taken advantage of the availability of eduroam in educational institutions across the country on site visits. This is perhaps the more traditional eduroam profile, where central funding from the UK NREN and local effort is a good fit.

I often work from home, and when I do will often sit in a coffee shop and work for part of the day to break up the monotony. Luckily, there is good coverage of The Cloud in my village, which I get free access to as part of my home broadband deal so increased local coverage probably would not make much difference to me.

However, the roaming parameters of a student at my local college would be very different. Even if they did have free deals with home broadband, these accounts are most likely to be used by parents. eduroam in local coffee shops, the local library and sports venues, as well as reciprocal arrangements with other local colleges and universities, could make a real difference to a user within these roaming parameters, and attracts different models for funding and support. The density of access within a 30-mile radius for these users presents a different use case and challenge to my 7,000-mile radius in attracting an equal number of supported access events.

My thanks to Philippe Hanset for running the session and colleagues from APAN and I2 for a different view on an established service.

VAMP 2012: Chair notes on VO Use Cases

Chair notes on VO Use Cases (iv), 12:45 – 14:15 6th September 2012

ELIXIR EGA AAI Pilot – Mikael Linden

The ELIXIR EGA AAI project is part of the work of the European Bioinformatics Institute (EBI). EBI has specific requirements regarding the release of data sets relating to genomes where permission has not been granted by the individuals involved in the study. This is the focus of the European Genome-phenome Archive (EGA). The data runs to over 400TB, with over 200,000 samples. EBI effectively acts as a secure broker for making this information available.

ELIXIR itself is a large umbrella project with this work forming a small element. The EGA AAI work is due to end in April 2013.

EGA currently issues a password to each individual researcher involved. This has created a scenario where usernames are actively shared within each research group – and these credentials are often used by researchers who have left the project and whose access should have been revoked. There is a large incentive to stop this common practice given the sensitivity of the data involved.

To support this requirement, the pilot is integrating the EGA web portal with a SAML2 SP. EBI has joined the Haka federation, and the intention is to interfederate using eduGAIN and Kalmar.

The authorization flow for EGA is complex because of the approval points needed for each individual. There have been many manual steps involved, and the pilot project is trying to automate these processes.

The project has identified three ways in which the authorization can be expressed:

  1. With the web portal acting as a SAML proxy, injecting an eduPersonEntitlement into the authentication flow. This is the option that has been implemented so far, using SimpleSAMLphp.
  2. With the web portal acting as a SAML AP, attaching eduPersonEntitlement to an attribute query.
  3. Using XACML with the portal (Argus).
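To make the first option concrete: the proxy receives the home IdP’s attributes, appends entitlement values for the datasets the user has been approved for, and re-issues the assertion to EGA. The pilot implements this in SimpleSAMLphp; the Python sketch below, with an invented entitlement URI and dataset identifiers, just shows the shape of the attribute manipulation:

```python
# Hypothetical entitlement URI template; real EGA values would be
# agreed within the project, not this example.org URN.
EGA_ENTITLEMENT = "urn:mace:example.org:ega:dataset:{dataset}"

def inject_entitlement(attributes: dict, approved_datasets: list) -> dict:
    """Return a copy of the IdP-supplied attribute set with one
    eduPersonEntitlement value per approved dataset added."""
    enriched = {key: list(values) for key, values in attributes.items()}
    enriched["eduPersonEntitlement"] = [
        EGA_ENTITLEMENT.format(dataset=d) for d in approved_datasets
    ]
    return enriched

incoming = {"eduPersonPrincipalName": ["researcher@example.fi"]}
outgoing = inject_entitlement(incoming, ["EGAD001", "EGAD002"])
assert len(outgoing["eduPersonEntitlement"]) == 2  # one value per dataset
```

The SP then only needs to check for the expected entitlement value, keeping the per-dataset approval logic out of the application itself.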

The software created for the project will be released under an open-source license.

Discussion from the group was around two themes that will come up repeatedly within the VAMP programme: the need for all the home organizations of the researchers to be participating in identity federations and the reuse and applicability of the software created in the project – could we reduce overheads and development time by facilitating the sharing of software developed to support VO workflows?

EUDAT: Towards a European Collaborative Data Infrastructure – Federated Identity Management and Access Control – Mark van de Sanden

EUDAT is a European Commission programme with a focus on data infrastructure. It has been in operation since 1st October 2011 and will run for 36 months.

Partners involved in the project include:

  • EPOS – European Plate Observing System – which has a large and complex dataset with distributed data sensors, large-scale statistics and a full metadata schema unique to EPOS.
  • CLARIN – Common Language Resources and Technology Infrastructure – CLARIN have a specific problem in having users spread over 300 centers within the EU. CLARIN have been working with identity federations for some time to try and find a common solution and experience for their users.
  • ENES – Service for Climate Modeling in Europe – ENES again wants to provide a consistent experience for its users and wants to be able to work with other climate groups.
  • VPH – Virtual Physiological Human – this project currently has a complex dataset with both structured and unstructured data, and complex environments to work with through typical hospital infrastructure.
  • LifeWatch – Biodiversity Data and Observatories – users within this project can often be in place for only a short time, and the solution for AAI needs to be flexible, transient and immediate.

EUDAT is looking to provide common services to these varied VO projects, and federated identity is clearly a key component as an enabling service for the data. EUDAT has to work with many different identity domains, including community domains, federated NRENs, existing infrastructures (EGI, PRACE, eduGAIN), local institutions, OpenID providers etc. Each of the communities is supported by different technologies, including OAuth, OpenID, RADIUS, SAML2, X.509 and XACML. EUDAT is keen to distinguish between leveraging IdPs and APs, with community-provided APs. EUDAT asks the common question: what about homeless and citizen scientists?

Name, Rank and Number

Last week, I chaired the consultation meeting for the EC AAA Study that is being led by TERENA with a consortium of partner organisations across Europe. The focus of that report is access and identity management for researchers specifically, but a lot of the comments at the meeting are very applicable to federation as a whole. The report from TERENA is not too long and is currently open for consultation – please do feed back to the team if you can.

One of the things that struck me at the meeting was a comment from David Kelsey on the oxymoron of ‘Identity Provider’ as a name. David pointed out that one of the last things that Identity Providers in our community do is provide identity information, and I think this is a very fair point – we are currently sticking to the modern-day equivalent of name, rank and number. I don’t have any detailed information on the attribute release policies of members of the UK federation, but I am fairly certain that most do not release much more than ScopedAffiliation (i.e. staff@…, student@…) and TargetedID (an opaque identifier). I think there are several reasons for this:

  • The UK federation rules only specifically mention four attributes. These are intended to be a minimum set of attributes to support, but have become by default a maximum.
  • Major concerns about the Data Protection Act make most institutions very reluctant to release any data at all. It is better to do nothing than fall foul of the law.
  • Although there was a real buzz around getting federated access implemented in 2007 – 09, there has not been enough follow up to really exploit the uses that attribute management can be put to. IdM is not being prioritised in the current funding climate within institutions.
  • There are not sufficient tools in place to delegate attribute management and population well across the institution, which is desperately needed for the process to work effectively.
  • The UK has focused on the publisher use case, and publishers are not asking for more complex attributes. There is a catch-22 for other scenarios where researchers, for example, are not using federations because they don’t supply attributes and institutions aren’t providing attributes because they do not see the demand.

There are a couple of efforts under way to try and address this problem and encourage institutions to a) more effectively manage their attribute release policies and b) feel confident releasing attributes to certain groups. One is being led by the eduGAIN team and is called the ‘Code of Conduct’. The idea is that Service Providers will be able to self-declare that they will abide by a conduct statement when it comes to handling attributes. Compliance with the code will be registered in metadata, and the intention is that the presence of this flag will give IdPs more confidence in passing information to the SP. There is a consultation open on this at the moment and eduGAIN would really like to hear from Identity Provider organisations in particular.

Another approach is more local to the federation. The idea of ‘SP categories’ is that when joining the federation, an SP can ask to be added to a certain type of category described by the federation. This might be, for example, ‘student services’ or ‘scholarly publishing’ or ‘research and scholarship’. The federation would provide some minimal vetting, and on completion would assign the SP to that category. IdPs would be asked to automatically release attributes of a certain type to all members of that category. InCommon is currently piloting this approach.
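The appeal for IdPs is that release policy collapses to a small lookup keyed on category, rather than a negotiation with every individual SP. The category names and attribute bundles below are invented purely to illustrate the shape of such a policy:

```python
# Hypothetical mapping from federation-assigned SP category to the
# attribute bundle an IdP agrees to release automatically.
RELEASE_POLICY = {
    "research-and-scholarship": ["displayName", "mail", "eduPersonScopedAffiliation"],
    "scholarly-publishing": ["eduPersonScopedAffiliation", "eduPersonTargetedID"],
}

def attributes_to_release(sp_categories: list) -> set:
    """Union of the bundles for every category the SP belongs to;
    an uncategorised SP gets nothing released automatically."""
    released = set()
    for category in sp_categories:
        released.update(RELEASE_POLICY.get(category, []))
    return released

print(sorted(attributes_to_release(["scholarly-publishing"])))
```

The vetting burden then sits with the federation at registration time, which is exactly where the overhead discussed below comes from.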

So will either of these processes work and help us to build a richer attribute economy? The Code of Conduct is a clean approach that has the backing of lawyers involved in the project, and is easily described and actioned in current metadata. However it still requires IdPs to have a separate interaction about the attribute requirements of each and every SP, and I am not sure if there is much incentive for SPs to volunteer to sign up to such an agreement.

Member categories are nice as they would allow a simple way for IdPs to manage attribute release for large groups of SPs, but the approach will have its limitations in the attempt to keep the groups manageable. It also introduces a new overhead for the federation and its member SPs at the point of registration, and it could be difficult to retrospectively get existing members to sign up to categories.

I’d be really interested to hear from Identity Providers in the UK as to whether either of these approaches would convince them to provide richer attribute release, what we could do to help facilitate this, and any other ideas you might have in this space. I’d also encourage you to reply to both of the consultations I mention in this post, as the teams would love your feedback.

Functional Requirements (new)

In the comments, Andy has rightly pointed out that this post does not identify any functional requirements. So here are just a few to get started:

  • The CERN-led FIM report is explicit that attribute release on a granular level is essential if the research communities are going to make proper use of federated access. To quote: “Many of use cases identified by the research communities call for personal information to be aggregated with community defined attributes in order to grant access to digital resources and services.”
  • European projects CLARIN, DARIAH and Project Bamboo have all cited limited attribute release as a barrier to their adoption of federated access.
  • JISC services, such as JUSP, have asked institutions to release additional attributes and have been unsuccessful in getting the majority of institutions to do so.
  • In discussions with many blogging and wiki platforms, the lack of release of email addresses has been cited as a reason for reluctance to use federated access.

As I do state in the original text for this blog, I feel there is a really large chicken-and-egg problem here. There are many, many services that want richer attribute release but reject access via the UK federation because they don’t believe they will achieve this level of granularity. These organisations therefore don’t join, which leads to an impression that this is not a requirement, and therefore institutions do not perceive a need to manage this – which is the point Andy makes below in describing the two approaches to development. I am, however, convinced that there is real demand and there are drivers. We actually need to be tackling both schools of thought to provide a service that meets community need.

Scientific Schizophrenia

Now I’ve stammered my way through my TNC plenary session, I’m attending talks on the issues of trying to use federated identity to support science and research.

Bob Jones starts us off by talking about the changing nature of identity. He points out that his first electronic identity was given to him because of his work. This simply isn’t true anymore. Within the research space, the multiplicity of identity, source and ownership is truly complex.

Bob and many others have been working on a paper talking about the issues of using federated identity management for research, which is available from TERENA core.

What do they want?

  • a common policy and trust framework for IdM based on existing structures;
  • unique electronic identities authenticated in multiple administrative domains, across national boundaries;
  • community-defined attributes to authorise access to digital resources.

The group is making recommendations to as many people as possible, including research communities, technology providers and funding agencies. It is interesting to note that the group has highlighted the importance of a risk analysis around the study – focusing on the need to get buy-in from security staff within organisations. Other factors include communities using data so sensitive that an ethics committee sets who can have access. Sensitive data is also going to drive the use cases for different assurance profiles.

One of the points raised in discussion was whether we need an eScience federation of SPs to help us deal with some of these questions. What would the future of our work look like if we moved to such a model?

The REFEDS group is already considering how best it can support the recommendations in this paper, as is the study team leading the EU study on AAA for scientific data and information resources.

Next up is Jim Basney talking about CILogon, which is looking at the problem of getting small numbers of researchers from numerous institutions effectively authenticated and authorised using a federated approach. CILogon uses a SAML workflow to mash up technologies such as Shibboleth, OAuth, X.509 certificates and a whole bunch of other stuff to achieve this. Using campus identities was important, as CILogon does not have the capacity to carry out identity vetting and doesn’t want to put researchers through separate steps. This places a focus on the development of assurance profiles at identity federations for this to work. Currently CILogon works only with InCommon.

Jim is starting to look at SAML ECP (Enhanced Client or Proxy) for solutions for non-web applications.

Another thing CILogon is looking at is testing and monitoring to make sure that people don’t get error pages. For researchers who want to get work done today, getting stuck at an unhelpful error page is not acceptable.

Achieving WAYRN

A while back I talked about the need for a ‘Where Are You Right Now?’ service within the UK federation. I’m pleased to say this work is now complete and ready for you to use.

Permission to access academic resources is typically achieved in one of two ways (ignoring some of the more spurious access approaches out there):

  1. Via IP recognition. This approach does not authenticate the user in any way, but checks the user’s apparent location based on the IP address of the device being used at the time, or;
  2. Via username and password (or other credentials). These enable an individual user to be identified but do not typically carry information about user location in the same transaction.

Whilst it is of course possible for a resource to check both, it is difficult to make complex or granular decisions based on this information alone, as an IP ‘authentication’ is typically an all-or-nothing binary decision. If you are in that IP range, you get access to everything…if you are not, you get access to nothing.

The UK federation has recently been exploring use cases where location information is both important AND granular alongside an individual unique authentication. There are some good use cases for this:

  • Resources that can only be used by a named individual when they are in a specific room – such as an exam resource or a highly protected research resource;
  • Walk-in access – where only specific resources are permitted to people who ‘walk in’ to the library or campus.

To meet this demand, the UK federation has developed a ‘location assertion’ extension to the Shibboleth software. This can be downloaded and implemented by your IT department. The plugin creates attributes by checking whether the IP address of the user agent, at the time of authentication, matches a given range of IP addresses identified by “CIDR blocks”, which you will more commonly recognise as IP range notation.

To demonstrate how it works, this is one of the rare examples where showing the metadata can actually help provide clarity. Within the metadata configuration for your IdP, there will be a new section with the id ‘userAgentAttributes’ – like this:

resolver:DataConnector id="userAgentAttributes"

A range of different CIDR blocks can then be cited, for example:

uadc:Mapping cidrBlock=""
uadc:Mapping cidrBlock=""

Individual machines can be expressed as entitlement values, for example:

uadc:Mapping cidrBlock=""
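In rough terms, the check the plugin performs can be sketched with Python’s standard ipaddress module. The block names, attribute mapping and addresses below are invented for illustration (192.0.2.0/24 and 203.0.113.0/24 are documentation-only ranges), not values from the actual extension:

```python
import ipaddress

# Hypothetical site configuration: each named location maps to a CIDR
# block, mirroring the uadc:Mapping entries in the IdP configuration.
LOCATION_BLOCKS = {
    "campus": ipaddress.ip_network("192.0.2.0/24"),       # whole campus range
    "exam-room": ipaddress.ip_network("192.0.2.128/29"),  # a single room's subnet
}

def location_attributes(client_ip: str) -> list:
    """Return the name of every configured block that contains the user
    agent's IP at authentication time; an off-site IP matches nothing,
    so no location assertion is released."""
    ip = ipaddress.ip_address(client_ip)
    return [name for name, block in LOCATION_BLOCKS.items() if ip in block]

print(location_attributes("192.0.2.130"))  # matches both campus and exam-room
print(location_attributes("203.0.113.9"))  # off-campus: no attributes at all
```

A resource could then demand the ‘exam-room’ value for the most sensitive material while accepting a plain ‘campus’ match for walk-in access.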

For walk-in access, this would mean that you could safely manage a couple of guest accounts for library access on campus and hand them out to walk-in users, knowing that if they attempt to use the resource off-campus the correct location assertion will not be passed.

The UK federation already has uptake from the schools sector for this extension and would be very interested in feedback and possible use cases from UK HE and FE. We invite you all to download and explore the extension and its possible uses.

So what are the possible drawbacks? The main risk I can see with this is general publisher apathy, which we struggle against all the time. Whilst the education sector is developing more and more sophisticated tools to ensure that the complex terms and conditions of academic licenses are met, publishers are repeatedly failing to make proper use of the technology, meaning that the wrong groups are gaining access to resources. A classic example of this is publishers who ignore the values expressed in ScopedAffiliation fields (i.e. affiliate, member, student) and grant equal access to all those groups. It is easy to imagine such a publisher ignoring the location assertion and making the technology development irrelevant.

I don’t have any magic answers as to how to improve publisher behaviour and engagement, but I do think this extension is a great piece of work, designed and developed to address a real community need – which is exactly what the federation should be doing. It’s now over to you to see what we can make of these tools.

*angle brackets have been harmed during the making of this blog post for formatting reasons. Money has been donated to the society for orphaned angle brackets.

Launching the Shibboleth Consortium

Today, the official begging letters asking for funding towards the Shibboleth Consortium started to trickle out. Full (probably unnecessary) disclosure – with one of my many different hats on I act as the Shibboleth Consortium Manager, so this is not a disinterested post…but I have had many different and interesting conversations with people around the theme of why we are doing this, which I thought would be generally interesting from a service modelling perspective.

Shibboleth is now a mature product, used in a significant number of organizations worldwide. This maturity presents new challenges to the team. Firstly, it is seen as a Product and people expect a level of support – there are currently more than 1500 subscribers on the Shibboleth users list. It also means more maintenance requirements, more reliance on ensuring the standards space is up to scratch and more new feature demands. It also means that it is harder to justify the reliance on the current three funding partners. Overall, now is an appropriate time for Shibboleth to be reviewing its models.

So what are the issues?

1. Huh, money? But it’s free!

This is still strangely one of the first questions I get from people who probably should know better. Of course the software is freely available, but there is always a cost involved in creating it – the service of development (I’ve argued elsewhere that academic publishers could well adopt a similar model rather than charging for content). For most open source products this happens in three ways:

1. It’s done in spare time by completely dedicated developers. Whilst this is a fantastic way for innovative projects to start, it is difficult to make the decision to rely on such software in production environments. What happens when the enthusiasm runs thin?
2. Time donated by organisations. This has been the model that Shibboleth has predominantly operated on to date. There are many challenges here – it can be difficult for developers to justify the use of their time in such a way, ‘big institutional projects’ come crashing down and divert the attention of developers, and it can place the burden of financing development unevenly on a handful of organisations. It is also very difficult to find the perfect balance of the talent you want at an organisation that is willing to release it.
3. Funded developers. The majority of mainstream open-source efforts do rely on formal funding streams to keep their products usable and relevant. There are many models for this: foundations, paid-for support, limited releases…but money is found somewhere to provide reliability and resilience.

Most organisations end up operating a hybrid model, and this is where we will be taking Shibboleth. Whilst asking for direct funds through membership and donations, we will still have many developers donating code on their own time and contributions direct from organisations.

2. Why bother with support?

Providing support for users (as in people implementing the software, not lost students) is currently where the biggest percentage of developer time is spent…which causes much debate. We do have a good community of practice on the Shibboleth lists and several really good people in the community who will answer queries to the list, but the emphasis for support often falls back on the primary developer. As a project, we seem to be stuck between a rock and a hard place, with some people calling for developers to stop providing any support and focus their attention completely on new features, and other people being, well, quite rude, as we don’t offer a full-service helpdesk. I’m not sure that people are fully aware that the Shibboleth project supports SEVEN different products in its current form – that’s an awful lot of stuff to simply coordinate and manage before we even get to support.

One solution would be to look at providing commercial-grade support as a project, but this is something we have held back on. For a start, there are other companies out there already offering this and it would be rude to tread on their toes…although we might call on them for membership donations! Secondly, this could cause a conflict with the focus of the project.

Personally I’m quite happy with the balance of support, maintenance and development we currently have, but would love the opportunity to address more of the new features we get requests for. To do this, we really need new blood. To do this, we really need those membership fees.

3. Why bother, what will I get?

It’s true it can be a difficult case to sell. If you can pick up the software for free, then why bother offering money back to the project? It is worth reiterating again that we are all totally committed to keeping Shibboleth as an open source project. It’s also true you don’t get all that much for your membership beyond the right to vote for Board Members and listen in on consortium calls (although I am trying to convince people of the value of the Shibboleth Cuddly Griffy Toy for all new members). We’ve attempted to document the membership benefits here and I’ve created a slightly wacky pdf of where the money goes.

It’s probably easier with this project to frame it as what you might NOT get if funding dries up. We’ve had to issue a couple of security advisories this year, and the developers were extremely quick to react to these external dependencies. If the developers aren’t there, the patches won’t happen and we are in risk management territory. It’s a sobering thought, but risk management is a sensible approach for any institution using open source.

So those are some of the major issues we are discussing and will continue to discuss within the Consortium. I hope you find this generally useful and I really look forward to working with some of you as part of the Consortium membership in the year to come.

    Lightning Talks at #TNC2012

    I’m attending the lightning talks on the first day at #TNC2012. Some of the things we are hearing about (I didn’t get them all; Twitter and unicorns are distracting):

    • Mujina: a way of testing your SAML IdP and SP from Surfnet
    • Encryption and Cryptography for Filesender: something I have talked about before from the nice people at AARnet.
    • An Italian pilot to test the use of twitter to support astronomy pedagogy (note Andy McGregor and the Elevator people).
    • A new approach to provisioning with SAML from Yaco, who also wrote the code for PEER.
    • A presentation on Unhosted, another project looking at breaking the cycle of giving data to third party providers. Sounds a lot like UMA – SMART people!
    • Peter on the (federated) TERENA Trusted Cloud Drive project that might be of interest to Eduserv cloud people.
    • Another shiny R&E federation in the form of Tuakiri (New Zealand). I’ve had to register an entity with Tuakiri and can confirm they are a nice federation to join.
    • A useful talk by Scott Rea on managing the risks presented by the recent CA attacks and failure. Definitely worth a look at the Dartmouth College report if you are an IT manager in R&E.
    • Chris Phillips on SCIM beating up SPML. I think it is too early to say if any of these standards will be useful in cloud provisioning, but I’m happy to sit tight and wait and see for the moment.
    • Bjarni talking about PageKite, another project starting to question the way the web is being used and whether it achieves the aims of ‘freedom and privacy’. This chimes with the talk I will be giving tomorrow (plug plug).
    • Shooting more moons with a collaboration between JANET and PowerFolder to federate up their product(s)…which is a bit like a private Dropbox on acid. So more handwaving towards the cloud folks.
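    The SCIM vs SPML point above can be made a little more concrete. Purely as a sketch – the schema URN follows the SCIM 1.x core User schema, but the user details and the endpoint mentioned in the comments are invented – here is roughly what a SCIM-style provisioning payload looks like:

    ```python
    import json

    def make_scim_user(username, given, family, email):
        """Build a minimal SCIM-style User resource as a plain dict."""
        return {
            "schemas": ["urn:scim:schemas:core:1.0"],  # SCIM 1.x core User schema
            "userName": username,
            "name": {"givenName": given, "familyName": family},
            "emails": [{"value": email, "primary": True}],
        }

    # Invented example user; a real client would POST this JSON body to the
    # provider's /Users endpoint to provision the account.
    user = make_scim_user("jbloggs", "Joe", "Bloggs", "j.bloggs@example.ac.uk")
    body = json.dumps(user, indent=2)
    print(body)
    ```

    Part of SCIM’s appeal over SPML is visible even in a toy like this: it is just JSON over HTTP, which goes some way to explaining why cloud providers have warmed to it.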

    I’ve been trying to find a common theme to cunningly link all of the talks. It’s difficult but I think most of the speakers were saying:

    • here’s some stuff.
    • we’ve done it for you.
    • it works.
    • go play!

    De/fragmented Collaboration?

    I’ve been thinking a lot recently about the idea of providing collaborative tools at a national level for education and research, spurred by several conversations and the general march of free-to-use tools proliferating around us on a daily basis. This post is an attempt to bring some of those thoughts and ideas together – I may not be entirely successful! I’m going to pose myself the question: should an organisation like JISC be funding collaboration tools, or is the market saturated? What value can be added?

    We’ve all become so used to having a steady stream of collaborative and multimedia tools and apps provided ‘free’ at our fingertips that we’ve become lazy consumers. I was amused this week at the outcry when Facebook acquired Instagram – the comments reflecting an emotional response of ‘don’t take my tool’ rather than a logical analysis of the fact that services we don’t pay for cannot live forever on Angel investment. (Here I could write another whole post on funding models through Angel investment, crowdsourced kick-starters, open foundations, national funding and commercial approaches – but I won’t. Phew!) I won’t do the hackneyed ‘if you are not the customer, you are the product’ thing, but we do need to be rational about the longevity of services we rely on but don’t pay for.

    What then is a sensible approach to funding collaborative tools? There is a general lack of interest in paying for a platform – particularly when you can never be sure which one you should be on and, most importantly, where your potential collaborators are. If there is so little interest in buying these tools, does national-level funding for research and education make sense?

    There is certainly evidence that we are using social and collaboration tools in the JISC community a lot. This ranges from the everyday on Twitter, hosted blogs in a variety of formats, wiki spaces, poll tools, voting tools, Google Apps, Dropbox, tools to take and manage photos, tools to edit videos…need I go on? The sustainability / reliance question is different in every case – sometimes we are relying on institutionally hosted tools, in other cases we are creating, storing and hosting our stuff on public sites where we are less sure of future service, and indeed service terms like ownership, data protection etc.

    Other academic communities certainly think there is power in nationally provided services, and are frankly doing it a lot better than the UK. The excellent Foodle service (which is head and shoulders above Meetomatic in terms of features) and Filesender are obvious examples.

    Another thing common to all of these is the need to log in. Again, the way in which we do this varies with the platform, the host, and its links. Many of the tools use OAuth or OAuth-style permissions via Twitter, Facebook and Google credentials. Sometimes we use our professional email address to register, sometimes we use our personal addresses. Generally, though, there isn’t much consistency. A question I often get asked is whether there is any value in providing an R&E OpenID instance (or instances). I don’t really have an answer to this – I generally ask for the use cases and more information: do researchers, students and staff members want it? What is clear is that we are mixing and matching our login approaches, which in turn affects the profile or persona we present when we are logged in. Whilst there is an argument to be made for reducing and consolidating the number of credentials used on these services, there is certainly a good argument for supporting a consistent approach to persona across them.

    I’m wondering if supporting the management of persona (and in turn credentials) is a good argument for providing such services at a national level. Could this be less about the platform and more about a better approach to presenting and using academic identity?

    Here I’m talking about something like a mash-up of VIVO, and SSO, and reputation services, and OpenID concepts, and author (and non-author) identifiers. A full-on proper identity layer for the R&E community, powered by federated access management via your institution. Is that an achievable vision? Here are some of my wants around this:

    • I’m sick of uploading the same photo again and again and again into every new system that wants it from me. Can’t I have a profile that just provides this? Ditto for all my other ‘profile’ data.
    • I want to be able to be very clear about the fact I am presenting my professional profile on this service, and my personal profile on that service. Ideally, I would like to have a link to guidelines about how that profile will be used that can be set by my institution for my professional account (i.e. the social media guidelines we all have) and by me for my personal account.
    • I want to be able to track my activity across all the tools I use for my job – I need some sort of identifier to achieve this.
    • I want this to be moveable across institutions.
    • I want to know my collaborators can provision themselves in to the social and collaborative tools I’m using quickly and easily.

    I could go on, but I don’t want to make this post endless or a use case specification for a non-existent service.

    One of the things that would absolutely have to change is how we think about the importance of identity management within our services. I get endlessly depressed by the number of times I get told ‘oh, we are going to sort out the access management stuff in phase 2’. Essential workflows within your services should never be relegated to phase 2. Mark Zuckerberg and Jack Dorsey did not get where they are by thinking of the identity elements of their services as a phase-2 tack-on. We are endlessly shooting ourselves and our users in the foot by rolling out services with random approaches to login, profile and identity management, without thinking about where the service sits in the everyday workflow of a user, and how many other times some other site has asked them to log in.

    So to get back to my original question: maybe if we could provide a decent, full, comprehensive identity approach to these services, there would be value in a national something…but if it was built, would they come? Do researchers, students and staff members at institutions have any interest in such an approach? What do you think?


    Today I saw this post via @ppetej, which is an interesting take on the Facebook / Instagram / identity message. It’s the perfect reflection on the difficulty of managing usability, security and privacy – which is the theme of my talk at the rapidly upcoming TNC2012 (gulp). Many people would say that being able to use your Facebook account consistently to provide your digital footprint is convenient, but it also means handing over all of our personal information and behaviours to Facebook. So what are the options?

    • Keeping accounts on each and every tool we want to use. This is all fine if you can be smart about it, but the problem is that most people end up using the same username (email) and password combo on all of them. From a security perspective, this is clearly problematic.
    • Accept the rise of big brother and go with the flow. Most sites allow you to log in with Facebook / Twitter / Google now…but certainly not all. There is also the problem of the gap between what a site might accept as a credential and the permissions your credentials carry. There isn’t much point in an academic publisher accepting Facebook logins when Facebook doesn’t give a verifiable statement of institutional affiliation.
    • Work on our personas so that we use the appropriate credentials in the appropriate place, and they reflect who we are in that context.
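    The third option could be as simple as a lookup from service context to persona. A toy sketch, with all the service categories and persona names invented for illustration:

    ```python
    # Which persona (and hence credential set) to present to each kind of
    # service. The categories and mappings here are purely illustrative.
    PERSONAS = {
        "publisher": "institutional",    # needs verifiable affiliation -> federated login
        "photo-sharing": "personal",     # no affiliation needed -> social login is fine
        "project-wiki": "institutional",
    }

    def pick_persona(service_category, default="personal"):
        """Return the persona appropriate to the context of a service."""
        return PERSONAS.get(service_category, default)

    print(pick_persona("publisher"))
    ```

    Trivial as it looks, nobody maintains this mapping deliberately today – we each carry an implicit, inconsistent version of it in our heads, which is rather the point.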

    Amanda’s piece seems to make some of what I talk about above make sense, considering the management of a professional academic persona separately from a personal one, but identity is a complex area. Can we ever get the flow right so that the user experience is good, the site secure, and the management and use of personal data acceptable to all?