Authorisation

I think we are all quite accepting now of the fact that ‘the customer is always right’ has been replaced with ‘the computer is always right’. All around the world, companies are struggling to offer even basic customer service (I buy stuff, you send stuff) as confused ‘support’ staff battle to make computers do what we all know is the logical response to ‘buy stuff’ – i.e. send me what I ordered, the way I ordered it, and not something entirely different to a different place at a different time :-) We fight, we shout, we cry, we hit our heads on our desks, and sometimes we get lucky and, after expending hours of time and energy, we actually get what we want. Other times we just give up. I’ve tried billing a company for the time I have spent being my own customer support – yeah, that doesn’t work.

So even in this basic, clearly customer-led environment, we can’t cope with being a customer. It gets even more confusing when our notion of customer is toyed with, turned on its head and cast as an illusion. There is a simple name for what happens when you don’t understand your role in a transaction – it’s called a grift…or a confidence trick.

Two of my favourite cultural discoveries of the year both deal with grifting – American Gods and Zombieland. Both detail a confidence trick that involves confusing the mark about the value of something and about their ownership of that something – a violin and an engagement ring respectively. In these examples, the actual objects put out to play in the grift are effectively worthless. A different approach is required when dealing with an object that has real value to the mark. In these scenarios, simplicity is key.

In American Gods, Wednesday steals significant sums of money from people with three simple techniques: 1) hanging an out of order sign on a deposit box, 2) wearing the uniform of a guard and 3) having a (fake) character witness at the end of the phone. People literally queued up to give away their money.

It’s something that we replicate every day.

I’ve always said that academic publishing is the biggest confidence trick ever run. I always imagine trying to explain academic publishing to an alien: universities fund people to carry out research, said researchers then give that research away to publishers for free, other researchers vet, judge and rate the stuff for free, publishers make it available for a fee…and universities buy it back. It’s Wednesday’s con in a different guise, although the publishers take it further and make you buy your valuables back. Brilliant! I’m not the only person to marvel at this business model.

Another classic con is the process of offering something to someone seemingly for free…as long as you just do x, y and z first. This is the approach used by the 419 scammers, and yes, people have fallen for these scams often enough to make them big business. We might laugh at the badly worded emails fronting these scams, but we fall for the same trick in other ways every day by signing up to the plethora of free services that the social media world has to offer.

“If you’re not the customer, you’re the product” may already have become a hackneyed phrase…but for me it rings quite true. Services no longer have to sell you something to make money; instead they take something from you that adds value to their offering and sell that on to other people: data, status and reputation. If a company can convince you to 1) sign up and give them all of your very valuable information, 2) shout out about your use of that service and 3) build up a profile within the service that will attract offers, confidence job done. All they need to do then is get you to pay a monthly fee for that service at a later date and they match the academic publisher in grifting skill.

I am of course being purposefully cynical here. I use loads of social networking sites and enjoy them immensely. I do so, however, with my eyes wide open. I’ve been amused lately by the outcry from people at Facebook’s recent changes to the way it displays news feeds and the way it uses cookies. ‘Let’s lobby them to change it!’ is not going to help. That’s a customer response. You’re not a customer.

So will we see a mass migration of people over to Google+ / The Next Big Social Thing? For that to work, you are going to need to persuade the things that add value to Facebook for you to move over too…and that’s people, not features. Once there, you can join in another identity battle – the ‘nym’ wars, with Google refusing to allow people to use pseudonyms while having a really awful way of judging whether something is, or is not, a real name.

Where’s the real fight?

The Open Access movement is of course trying to tackle the academic publishing con. In the identity space, championing the non-open is good…identity should not be an open commodity. People are already trying to fight this cause under the banner of a ‘personal data ecosystem’ (a phrase that makes me shudder; ‘protect my s**t’ would probably resonate with more people!). You’ll see things like UMA from Kantara, Mydex in the UK and various commercial offerings. The challenge faced by these attempts to take back ownership of personal data? The people giving it away just don’t care enough.

So what can we do about it? In terms of the identity problem, I think you have to either care about it and take responsibility for managing it, or decide you don’t care. If the former, it’s not enough to just move to another platform that you think might be more caring and cuddly about your identity information; you need to engage with something that allows you to actively manage your identity data. It’s a trade-off that will restrict the services you get to use…but is that a price worth paying?

If the latter, then don’t complain if the service isn’t to your liking – you are not the customer, you are the product…you will be assimilated.


I came across this interesting article about OpenID via @foolington from the JISC Logins for Life project. OpenID has naturally hit that point in its life where people are talking about it being dead and failing to achieve its goals – we see this dip with all products and we saw the same criticism of SAML, which I believe is now firmly embedded as a mainstream standard supporting a variety of technologies (including OpenID).

I may have seemed critical of OpenID in the past, but this is simply because people have a habit of trying to compare apples and pears – assuming that things like OpenID will ‘replace’ efforts such as the UK federation. The aims of these two initiatives are entirely different.

I think this article has it wrong about OpenID, mostly because it assumes the primary function of OpenID is authentication. I believe that the strengths of OpenID actually lie in authorisation and identity management / security issues. More on that later.

Firstly, let’s look at the reasons why the author thinks OpenID fails as an authentication method. These seem to focus on the user experience and typical user behaviour in relation to authentication. I’d be the first to admit that the user experience of access management needs improving – full stop. This is why we are expending lots of energy on initiatives such as the JISC Publisher Interface Study and the proposed REFEDS Discover Project. The basic concept of OpenID does indeed have you ‘logging in’ with a URL, but most mainstream adoptions of the concept use a more traditional username and password set. And why is a URL as username so strange? It may be a bit different, but quite frankly it is no stranger than the trend for using your email address as your username (I have ranted before about that being a problematic process, I will spare you now!).
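
To make the ‘URL as username’ idea concrete, here is a minimal sketch of the HTML-based discovery step an OpenID consumer performs when you hand it your identifier URL – it is the page behind the URL, not the string itself, that points the consumer at your provider. This uses only the Python standard library; the identifier URL is a placeholder, and a real consumer would also support XRDS/Yadis discovery and verify the signed assertion that comes back.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class OpenIDLinkParser(HTMLParser):
    """Collects OpenID 2.0 provider endpoints from an HTML page."""

    def __init__(self):
        super().__init__()
        self.endpoints = {}

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel, href = attrs.get("rel", ""), attrs.get("href")
        # Per OpenID 2.0 HTML discovery, the identifier page advertises
        # its provider endpoint (and optionally a local identifier).
        if href and rel in ("openid2.provider", "openid2.local_id"):
            self.endpoints[rel] = href


def discover(identifier_url):
    """Fetch the claimed identifier URL and extract provider links."""
    with urlopen(identifier_url) as resp:
        parser = OpenIDLinkParser()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    return parser.endpoints


if __name__ == "__main__":
    # Hypothetical identifier; any page with the right <link> tags works.
    print(discover("https://alice.example.org/"))
```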

One of the other points made is that the ‘problem’ of having multiple username and password sets is a one-time-only problem, as people always tick the ‘remember me’ button and store this information in a cookie. Well yes, they do – though this is one behaviour I have finally managed to break myself of. This is an argument I often hear against the work of federations as well. That is fine if you only ever use one PC and will never need to remember that password when you are visiting a friend, or in your local library, or on holiday in an internet cafe trying to work out how to cope with your cancelled flight. Letting people rely on a set-up that encourages them to forget their passwords doesn’t seem like a particularly good idea – surely having one reasonably unique login is better than this?

I don’t talk much about security on this blog, simply because there are far wiser people who know much more about it than me (@futureidentity and @josiefraser would be two people to follow if this area interests you). I do get involved in lots of discussions about security and privacy as a natural part of my work. In another timely incident, Josie Fraser posted this piece this morning on privacy trends for 2010. This clearly highlights some of the problems of giving up our security to browsers and giving up our identity to providers.

So this gets me back to some of the benefits of the OpenID approach – it puts you in control. You control what information you want to release; you control your identity. OK, a lot of the time this sense of control may be false, as providers will insist on consuming all of your information in order to grant you access to a service, but it is a start. Having one username and password set has to be better than using the same password on nearly every site – I know I worry about my use of passwords in this way (although I admit I have not really done anything to improve it).

Of course, ‘better’ access management will always be a hard sell to the end-user and will not be something they naturally run around asking for – but does this mean that OpenID is dead? I don’t think so. Like taking vitamins, not drinking too much and exercising regularly, managing identity online is something most of us know we should do, but it quite often feels like too much hassle, so we take the line of least resistance and muddle on through. Perhaps OpenID should make it on to my New Year’s resolutions list? I’m always so good at keeping those….


Just back from the very hectic FAM10 conference and the RAPTOR project board meeting in Cardiff. RAPTOR is looking great by the way – have I mentioned that before? :-)

It’s now time for me to start putting some serious head time into the future of Shibboleth: where the Shibboleth Consortium takes the software, and what form the Shibboleth “Foundation” may take.

I was thinking about this last night, and it was particularly relevant that @iandolphin24 chose to tweet at that very moment that JASIG and Sakai are planning a merger. JASIG and Sakai are perhaps THE most successful open source organisations operating in higher education today (although I would have to give a nod to Moodle as well and of course we have to be looking towards Apache in more general terms) and are the models that I would immediately go to when thinking about where next for Shibboleth. There is some great stuff for me to think about here, and some very good people to talk to.

However….

‘Open’ is the word of the moment. Open data, open science, open access, open educational resources – you cannot move without being ‘opened’ in education at the moment. The OER movement is proving to be particularly persuasive for organisations such as HEFCE, with open access still high on many people’s agendas despite its more controversial status. However, when that list is ticked off, open standards and open source are often ignored, despite the diligent work that goes on in these areas from the good people at OSS Watch and CETIS in the UK. It is still incredibly difficult for an educational institution to include an open source platform or software implementation as a serious contender in a procurement process.

So where am I going with all of this? Well, for the last couple of years working with JISC and the UK federation, my tagline has been “standards compliant, technology neutral”, and I absolutely believe that this should continue to be the ongoing mantra of the UK federation. I also think we have done some good work on persuading people that open standards make good business sense in terms of avoiding the lock-in of the proprietary system. However, if we are really going to position Shibboleth well, it is time for me to think hard about the importance and the business proposition of open source once again, having left OSS Watch in the hands of better people than me many years ago. I think it is time to get open source back on that ‘open’ list, which is why the JASIG / Sakai news is very, very interesting.

Interestingly, life is all about positioning at the moment. I’m thinking in similar terms for REFEDS, and the place REFEDS has in the world of Kantara, OIX and Identity Commons. More on this very soon, I would imagine.

As I mentioned in my talk at FAM10 (when I wasn’t talking about Penguins), I believe that it is important that education has a strong voice in these areas, I believe we have specialist requirements and I believe it is important that we make sure that educational institutions help shape the access and identity management space and don’t just leave it in the hands of the commercial world to tell us what we can have.

So that’s all going to be easy to sort out….right? :-) Better stop messing around with blogs and get to it!


I will be spending today at the tf-emc2 meeting in Copenhagen. For those of you not familiar with tf-emc2, suffice to say it is the home of the uber-techies involved in middleware stuff in Europe (and indeed beyond, with Ken Klingenstein in the room). I’m in a mellow mood after being charmed with great food and free wine at my hotel last night so will probably be very enthusiastic about everything that comes out of the meeting. Forgive me.

Moving Metadata Around

We start with great promise with a report from Ken on the rather tongue-in-cheek ‘BEER’ (Bucket of End Entity Registrations). I think the name may change at some point :-) The idea here is to create a place where people can ‘dump’ entity metadata for general use. The metadata would carry with it the name of the depositor and meet some general technical standard requirements, but beyond that would not carry a strong policy trust framework. Instead, a ‘terms of use’ would put the risk on the consumer. This will be a very interesting experiment in pushing the boundaries of trust and trust requirements. Something to keep an eye on, and something I will be looking for volunteers in the UK to play with. Literally, I will be encouraging you to consume BEER!
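
As a sketch of what consuming BEER might look like in practice, the few lines of Python below pull down a metadata aggregate and list the entities in it. The aggregate URL is entirely made up, and given the deliberately weak policy framework, a real consumer would want to check the XML signature and validUntil before trusting anything in the bucket.

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

MD = "{urn:oasis:names:tc:SAML:2.0:metadata}"

# Hypothetical location for the aggregate; no such service exists (yet).
AGGREGATE_URL = "https://beer.example.org/aggregate.xml"

with urlopen(AGGREGATE_URL) as resp:
    root = ET.parse(resp).getroot()

# An aggregate is typically an EntitiesDescriptor wrapping many
# EntityDescriptor elements, one per IdP or SP.
for entity in root.iter(MD + "EntityDescriptor"):
    print(entity.get("entityID"))
```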

How Many IdPs?

We moved on to a conversation about why an institution would want to have multiple IdPs. The purists in the room pointed out that it is technically easy to have one IdP, and that a single authoritative point of access for an institution makes for a strong trust argument. The more practical amongst us pointed out that many institutions have medical schools, library directories for external users, alumni directories and start-up companies / BCE-style organisations that really want to maintain a separate directory and IdP. I think this again reflects the very different political landscapes in the smaller European countries compared to places like the UK, the US and Australia.

Monitoring, Testing, Proving, Baking

One of the areas that people have been interested in for a long time is the monitoring and diagnostics space. On a practical level in the UK we are making some progress here in terms of baseline statistics, through RAPTOR and the JISC Usage Statistics Portal. The more basic business of monitoring problems (downtime) and diagnosing them still has some way to go, but the librarians who spend a vast amount of time reporting publisher downtime will appreciate the absolute need in this area.

This is a nut tf-emc2 is very keen to crack. There is some work within the GEANT framework to do this (GN3-JRA3-T2: Federation Lab, for those of you who understand the strangeness of GEANT) and there has been some basic work using Nagios as a framework, but there is no tabled solution at the moment. This is something the UK federation Board discussed at a recent meeting, so we are fully aware of the UK need for this.
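
To show how low the baseline bar is, the sketch below simply polls an IdP endpoint and reports whether it is up – the status URL is hypothetical, and the hard part that the Federation Lab work is chasing is testing the full SAML exchange rather than a single HTTP response.

```python
from urllib.error import URLError
from urllib.request import urlopen


def idp_is_up(status_url, timeout=5.0):
    """Return True if the IdP answers with HTTP 200 within the timeout."""
    try:
        with urlopen(status_url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


# Hypothetical endpoint; the exact status path varies by deployment.
print(idp_is_up("https://idp.example.ac.uk/idp/status"))
```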

Definition Requirements

Something we haven’t discussed much in the UK is SCHAC. The reason for this is simple – the UK federation has taken the stance that it will only make recommendations around a small set of attributes using the eduPerson specification. SCHAC takes things a bit further for those interested in a fuller scope…perhaps people looking at the Identity Management Toolkit and the upcoming JISC call on Identity Management. This really follows on from previous posts about the problems we face in encouraging institutions to populate more attributes. It basically comes down to a disconnect between where the requirement comes from (library, virtual organisation) and the owner of the directory (IT department). This is definitely a problem we need to crack if we want to exploit the potential of federated access and of virtual organisations – but how do we manage this?
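
To illustrate the difference in scope, here is a hypothetical side-by-side of the sort of small eduPerson set the UK federation recommends against a couple of the extra attributes SCHAC defines. The attribute names are real; the values are invented, and in real life they come out of the institutional directory and the IdP’s attribute release policy rather than anything like this:

```python
# The small core the UK federation recommends (real eduPerson attribute
# names; the values below are made up for illustration).
uk_federation_core = {
    "eduPersonScopedAffiliation": "member@example.ac.uk",
    "eduPersonTargetedID": "an-opaque-per-service-identifier",
    "eduPersonPrincipalName": "jbloggs@example.ac.uk",
    "eduPersonEntitlement": "urn:mace:dir:entitlement:common-lib-terms",
}

# A couple of the extras SCHAC defines for a fuller picture of the user
# (again, the values here are invented, including the URN).
schac_extras = {
    "schacHomeOrganization": "example.ac.uk",
    "schacHomeOrganizationType":
        "urn:schac:homeOrganizationType:uk:higherEducationInstitution",
}
```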

The Proxy Question

A bit of a blast from the past when the idea of a centralised proxy service came up today. We looked at this issue at JISC back in March 2008. In some ways this does make sense for IP-only resources, but I have lots of reservations. The more opportunities we give people to take ‘the easy way out’ of IP-based access, the fewer opportunities we will have for decent access and identity management in the future. We also run the risk of annoying vendors like EZproxy by destroying their business models. I also think the complexity of trying to keep a central service populated with up-to-date information is not scalable – certainly not in the UK. A couple of NRENs are looking at taking this forward, however, so we will keep an eye on it.

Collaborating on Collaborations

I’ve mentioned COManage a couple of times, and it is starting to show some maturity as a platform for managing virtual organisations. There are some great mock-ups on the site, which will be of interest to the VRE people, I would have thought. A very similar project is COIN from SURFnet.

In Perfect Harmony…

Well the saml2int profile just has to be important in terms of getting us all in line. I hope that we will all be able to move towards this regardless of software (Shibboleth, OpenAthens, Guanxi) but I would encourage you to be asking for it!

Other important work in this area is the Kantara Full Matrix test – as this becomes embedded, it should be an excellent way for you to tell whether something is *really* SAML compliant, as products will come complete with test results and the rubber stamp of approval that we as federations have been reluctant to take on by testing software ourselves.

For more lightweight testing, and as I mentioned earlier, Feide are working on an Automated Testing Tool – essentially a test IdP that emulates HTTP to create real test environments for people installing SAML products. This is still work in progress, but there is a great video available.

Other Stuff

For those of you interested in OAuth, Diego gave an overview of OAuth2lib – a project they are working on to integrate OAuth with PAPI. I won’t say much more as I really am not qualified to talk about OAuth, but will link to the slides when they are made available.

Our very own Logins4Life project had a quick demo – see more on the Logins4Life website.


I’ve spent a lot of time recently asking people if they would be interested in extending the scope of the TERENA Server Certificate Service within the UK. Currently, JANET are offering straight server certs and are now looking into wildcard certs. The final part of the story is personal certs.

The standard answers I get from people within the UK are:

  • No demand!
  • Too complex!
  • Lack of control!
  • What if the issuer changes?

At #tnc2010 today we heard a great presentation from David Kelsey about the opportunities for using personal certs to properly integrate the grid space. I’ve also heard a lot of great things about the portal currently being presented, which offers personal certs via a federated login. From a personal perspective, I’ve always thought that there are great opportunities for personal certs in the e-portfolio space.

So there are use cases. There is a well managed process for getting personal certs from the Comodo service via TERENA SCS. There is a lovely bit of development to allow us to issue these certs via a federated login. So lots of people here are quite rightly asking us why we aren’t doing this.

So, UK, why not?


This morning at #tnc2010 we kick off with a meeting on eduGAIN. In the past, I have always assumed that the UK would not be able to participate in eduGAIN due to its rather onerous entry level – mandatory attributes simply don’t fly in the UK. I still have some concerns, and think the lighter-weight metadata aggregation process will be more successful in the long term, but I think we have to step up and at least try to address the requirements. It could still be that we fail.

Ingrid Melve asks why we need a centralised service for eduGAIN if it is an interfederation model. Valter Nordh paints a picture of a future service operation of 1 FTE – this will still be too heavyweight for some people and raises sustainability questions.

What happens to eduGAIN when the GEANT project stops? It has to have a home. Policy will need to be addressed and tweaked, and there needs to be a place for this to be agreed. REFEDS might have a role to play here, but it will be essential for the 20.3 FTEs assigned to eduGAIN to spend some time planning this future.

For me, at the moment, I’m afraid it is less eduGAIN and more ‘where’s the gain?’. I’m going to push the appropriate people in the UK to see if we can be full participants, but I don’t think the eduGAIN team, the closed development processes and the lack of future planning are making this a particularly easy case for us to sell.


I had a very interesting discussion yesterday with a colleague about how it might be possible to make federated access management work for public libraries. As usual, it comes down to the two basic questions of access management:

  • Who is managing credential information to allow authentication?
  • Who is authorised to access the resource?

I’ll deal with the second question first as it is perhaps the more interesting. I know very little about how public libraries license electronic resources, but I do know that many are underused. To give you an idea of the extent of information available online at libraries, have a look at Manchester Public Library’s e-resources.

Manchester Public Library currently manages access via library barcode number – i.e. you have to be a member of the library to access that resource. Interestingly, Manchester City Council is actually responsible for the identity management – you get passed to their website to log in and then passed on to the resource.

I wonder if the licence for Manchester Public Library is for library members, or is based on some other criteria? The reason that this is an interesting question is that anyone in the UK is entitled to join Manchester Public Library. I can join from my home in Surrey online, and quickly get access to all of those resources. Fantastic for me! Not a great business model for the publishers. The only reason this is not a real issue is because very few people exploit these access paths.

A different model for public libraries may be to look not at licensing for members, but at licensing regionally. Pricing is normally agreed based on regional population, yet access is offered to members – a set of criteria that does not add up.

So that is authorisation. Now, authentication.

It does make sense for public libraries to look at using FAM. Barcode access processes are often clunky, often insecure, and they are yet another system for both libraries and publishers to have to manage.

If public libraries continue to offer access based on membership, the library or a body related to that library would have to run an Identity Provider in a federated access management environment, as they have the membership information. It may be possible for some libraries to make use of the work being undertaken by Local Authorities to provide federated access for schools – but there will still be technical implementation costs.

A more interesting model might be to exploit the planned interfederation between the UK federation and the Government Gateway. This will allow people with a ‘citizen’ credential within the Government Gateway to access resources within the UK federation. If we then assume that these citizen accounts contain some sort of standard location information (i.e. I live or work within the boundaries of Greater Manchester) it would be very easy to authorise all users against a regionally negotiated licence as opposed to a member negotiated licence. This could be achieved with very little expenditure on technical infrastructure by libraries, local authorities or publishers, but would require a change in the way the libraries negotiate licences. That surely has to be an interesting approach to explore?
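
As a toy illustration of how simple the authorisation decision could become under a regional licence, here is a sketch that grants access on an asserted home postcode rather than library membership. The attribute name, the postcode test and the list of areas are all assumptions for the sake of the example – the real decision would hinge on whatever location attribute the Government Gateway actually asserts.

```python
# Postcode areas roughly covering Greater Manchester - illustrative only.
GREATER_MANCHESTER_AREAS = {"M", "OL", "BL", "SK", "WN"}


def authorised_for_regional_licence(attributes):
    """Grant access if the asserted home postcode falls in the region."""
    postcode = attributes.get("homePostcode", "")  # hypothetical attribute
    area = postcode.split(" ")[0].rstrip("0123456789")
    return area in GREATER_MANCHESTER_AREAS


# A user asserted by the citizen credential as living in Salford gets in;
# someone joining from their home in Surrey no longer would.
print(authorised_for_regional_licence({"homePostcode": "M5 4WT"}))   # True
print(authorised_for_regional_licence({"homePostcode": "GU1 1AA"}))  # False
```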
