CPS - Our Data, Big IT - another missed opportunity
The entry of the Centre for Policy Studies (CPS) into the "free"/"open" debate is not surprising (http://www.cps.org.uk/cps_catalog/it). It feels, though, like another missed opportunity to move the debate on, hampered as it is by political point-scoring.
Varney has it that the government needs to hold “...a ‘deep truth’ about the citizen based on their behaviour, experiences, beliefs, needs and rights”; the CPS report argues for something they call "Government Relationship Management", at whose heart lies choice in the location of your personal data, and access to it based on standards (and, though not mentioned, rights). Whether or not data "belongs" to the individual, that data should be exchanged using open standards - the web services and metadata chestnut. Hence my interest.
The report focuses on cost, ownership and security, and argues that the solutions lie in a change to the model. From the perspective of opening up access, the report actually offers little in the way of new ideas beyond the central thesis that who holds our data should be our choice.
Of course many private sector entities hold data about each of us. Are they any more trusted (fallible humans account for most mistakes) or cost effective (consider the long run case for PPP/PFI and that government will have to run the same infrastructure anyway before answering!)?
Innovation with new tools and technologies is generally far more rapid in the private sector, and there are assuredly good lessons and cost savings for government IT across all manner of these, from the cloud to SOA, APIs, usability and so on.
The report bemoans data replication (as it is easy to do from an armchair), but storage costs continue to tumble and techniques such as automated de-duplication provide further savings. It also bemoans data sharing, which is in part the flip side of the same coin. This is not where the major cost savings lie either, and even the report admits that suggestions of 50% are “vague”.
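To illustrate why replication is cheap to mitigate (a sketch of my own, not drawn from the report), content-addressed de-duplication collapses identical records to a single stored copy by hashing their content; the record names below are invented for the example:

```python
import hashlib

def dedupe(records):
    """Store each distinct record once, keyed by a hash of its content.
    Duplicate copies map to the same key and so collapse to one blob."""
    store = {}
    for rec in records:
        digest = hashlib.sha256(rec.encode("utf-8")).hexdigest()
        store[digest] = rec  # identical content -> identical key
    return store

records = [
    "citizen:123,name:Smith",
    "citizen:123,name:Smith",  # duplicate held by a second department
    "citizen:456,name:Jones",
]
unique = dedupe(records)
print(len(records), "records,", len(unique), "stored")  # 3 records, 2 stored
```

The same principle underpins the automated de-duping in real storage systems, which operate on blocks rather than whole records.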
But cost savings there can be, for sure; mechanisms that drive value to the citizen and ease service provision should be at the core of the debate, not point-scoring phraseology and impossible-to-substantiate claims. So where might that value derive?
The CPS report rightly endorses (increasing) adoption of SOA and the cloud to spur efficiencies, and the big providers are on this already. We are beginning to see ‘vertical market’ and ‘localised’ interfaces for a range of ‘use cases’ from citizen through service provider to analyst. Private sector experience is that these approaches deliver significant downstream IT savings whilst embracing outsourcing, providing greater flexibility and speed to market for service providers. At their heart lies adoption of authentication and authorisation standards and technologies that deliver rights-based access to data and functionality depending on the user and other factors. All power to that elbow, as it deals with both the replication and sharing arguments when implemented correctly.
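The rights-based access idea can be sketched in a few lines. This is a minimal, assumed illustration (the roles and field names are mine, not any standard's): a policy maps each data field to the roles permitted to read it, and every view of a record is projected through that policy.

```python
# Hypothetical policy: which roles may read which fields of a citizen record.
# Real systems would use a standard such as XACML or OAuth scopes; this is
# only a sketch of the principle.
POLICY = {
    "address":      {"citizen", "service_provider"},
    "tax_history":  {"citizen", "analyst"},
    "health_notes": {"citizen"},
}

def can_read(role, field):
    """True if the authenticated role is authorised to read the field."""
    return role in POLICY.get(field, set())

def visible_fields(role, record):
    """Project a record down to only the fields this role may see."""
    return {f: v for f, v in record.items() if can_read(role, f)}

record = {"address": "1 High St", "tax_history": "...", "health_notes": "..."}
print(visible_fields("analyst", record))  # {'tax_history': '...'}
```

One authoritative copy plus per-role projection is exactly how such a design answers both the replication and the sharing objections: nothing is copied out, and nothing is shared beyond what the policy allows.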
Government would point out that it needs access to a consistent data set so that government's (presumably outsourced) analysts could compare apples with apples - without a level playing field, what hope for the postcode lottery? A fertile imagination will see risks in an online privatised ID!
Like other recent reports (including POIT, UK Location Strategy) those close to Whitehall (which policy wonks, analysts, civil servants etc inevitably are) sometimes appear reluctant to recognise what distributed architectures bring to this debate - it doesn't actually matter where the data is held as long as it can be discovered, accessed, exchanged and so on.
There is of course a debate to be had about what data should be held, who should collect it, who should have access to it, where it might be stored, what rights management apply and in what circumstances and so on.
However, what has got lost in the wash is any definition or acknowledgement that what is in essence under discussion is ‘data for the “public good”’, be it through aggregation or for the individual citizen.
Under this world view, ‘public’ would be defined as discoverable or searchable. To be those things you need openness, interoperability, web services and, above all, a mechanism that integrates authentication and authorisation into the solution via the construct of metadata and rights management. Mandating metadata capture and discoverability (publishing) would provide much of the enabling framework and dissipate the faux concern over whose data it is.
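A minimal sketch of what such a mandated metadata record might carry (the field names are assumptions of mine, not taken from any cited standard): enough to find the dataset, reach it over a web service, and know the rights attached - without caring where it is physically held.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    """Illustrative discovery record for a dataset, wherever it is held."""
    identifier: str
    title: str
    holder: str            # who physically holds the data - location-neutral
    access_endpoint: str   # web-service URL through which it is exchanged
    rights: list = field(default_factory=list)  # who may access, on what terms

    def is_discoverable(self):
        # A record only counts as 'published' if it carries enough
        # to find the data, reach it, and know the rights attached.
        return bool(self.identifier and self.access_endpoint and self.rights)

md = DatasetMetadata(
    identifier="uk-gov-0001",
    title="Benefit records",
    holder="DWP",
    access_endpoint="https://example.gov.uk/api/benefits",  # hypothetical URL
    rights=["citizen:self", "analyst:aggregate-only"],
)
print(md.is_discoverable())  # True
```

With records like this published and indexed, "whose server is it on" becomes an implementation detail; discovery, access and rights are what matter.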
It is easy, and correct, to point the finger at ineffective and poor-value government IT projects (I'll give you some less familiar examples - £50m for the RPA's SPS so far, £7.2m for the planningportal architecture alone over 3 years). But to intimate that a vague and unpalatable solution offers some panacea for these failings is an incoherent leap based on a narrow philosophical outlook and narrow technical thinking. The promise of the distributed, discoverable semantic web fits far better with the sought-after vision but has been mostly missed.
ps Attracting advertising spend requires the advertising portals (sorry, search engines) to harvest ever more granular data about their users in order to 'segment' and then 'target' the adverts accordingly, to garner the greatest revenues. They seek a ‘deep truth’ about the citizen based on their behaviour, experiences, beliefs and needs - and on how to get click-throughs for advertisers.
Sounds suspiciously familiar, no? The only difference is the absence of rights - a consent easily given and hard to wrest back. The government has been notable for its ‘light touch’ regulatory environment, with weak regulation, governance and compliance - you would likely be astonished at the permissions you have given the businesses to whom you have in effect licensed yourself. ‘Minority Report’ was an exemplar to Dubya, not the savage warning that Mr Dick intended.