May 21, 2015 · IODC

A guest post from Steven De Costa on the need for guidelines, collaboration, and best practices to accelerate open data adoption in governments.

Individual agencies face the challenge of developing data access methods that strike the best balance between data confidentiality, protecting privacy, maintaining data quality, and releasing data that is interpretable, relevant and genuinely valuable to users, now and in the longer term. That is a big ask without direction.

Open data is at the core of an ideal data system in which we will see the integration of data from an array of sources: private, public and community; business and household; cross-sectional and longitudinal; survey and administrative; national and sub-national. There is now broad agreement that this new approach to collecting and sharing data will allow business dynamics, as well as government-to-citizen (G2C) and citizen-to-government (C2G) interactions, to be enhanced in myriad, even globalised, ways. Yet committing the time and resources to conceptualising, let alone realising, those opportunities is assumed by many to be far too daunting a task. How do we leverage the potential of our data, for our benefit and for the good of all? Opening up data, it is feared, is nothing but a big drain followed by a flurry of downloads in the short term. Without holistic forecasting of its possibilities, agencies are handicapped by the one big fear that looms ever perilously. It is the sinister mainstay of accountability: the fear of getting it wrong.

There are three key ingredients to negating such fears.

1. High-level policies and guidelines need to be created

Analysts need to be engaged to help develop high-level policies and guidelines. These could be whole-of-government or, where necessary, jurisdictional, but it is important not to leave them in the hands of bureaucrats alone. You have to go big picture. You have to think beyond any immediate impact on service delivery.

We hardly need to make the case anymore. A number of evidence-based reports on the value of open data have now been published, not to mention work on high-value datasets and data publishing best practice; there are case studies galore, and advice drawn from many lessons learnt. All of this provides enough evidence for analysts to do direct work within industry segments and model the expected impact of shifting government operations at a policy level. Open government and open data are being mandated to varying degrees across many jurisdictions, but assistance with econometric modelling is crucial. Perhaps an open data consultants panel, or at the very least a Multi-Use List, could provide a good starting point.

2. The public sector needs more digital leaders

Enter the Digital Transformation Office (DTO). The DTO will become an executive agency on 1 July 2015, and I have high hopes for it. Its agenda is all about streamlining service delivery, digitally, with a customer focus. It is not specifically about enabling a shared data model, but the whole thing is underpinned by a drive to empower citizens, and in this regard streamlined digital services and open data are congruent pursuits. The startup-esque nature of this new agency could also herald the beginnings of a new style of end-user-focussed bureaucrat. It is a gamble: introducing yet another office churning out rules is one thing, but stacking it with a new breed of digital bureaucrats is virtually unprecedented. Yet the public service is changing; identifying and meeting head-on its own shortcomings in service delivery to this extent is testament to that. We also need to see more digital leaders being employed, trained up and promoted within other agencies too.

3. Open collaboration for open data

In the current environment, a highly effective Australian Public Service (APS) institution is one that works effectively, over the long term, with other sectors and jurisdictions, including the private sector, not-for-profits, and state and local governments.

In 2015, ‘collaboration’ for government is no longer just another word for ‘paying lip service’. If you look at the leadership and core skills strategy refresh for 2014–2015, you will see that the APS has identified the need to become more effective by nurturing opportunities for collaboration. This collaborative mindset is already coming into its own, with encouragement from Minister Turnbull to build on public-private partnerships and the whole-of-government (WoG) emphasis we see in programs like data.gov.au and govCMS. The aforementioned DTO is set to further boost this collaborative ethos.

Community and not-for-profit groups, funded by sponsors from industry, research and academia, along with individuals with domain experience in open data, knowledge systems, community engagement and the humanities, are also well placed to partner with government agencies at this time. As with private partnerships, such groups can save the government money. One reason budgets can be lowered is simply that the cost of generating innovation within government is higher than it is in the private sector, and the cost of innovation within community groups supported by volunteerism is lower again. So the opportunity now exists to tick the collaboration box while also trimming expenses, via public-private partnerships with groups like Code for Australia, GovHack teams, Open Australia, ODI, OKAU and many others.

Community and private sector involvement across the whole process of releasing data enables that invaluable sense of ownership. When we own something, we want it to be right and we want to see it thrive. In some cases the government can spend millions to sustain a dataset that is 70% correct. Sharing it, even with that deficiency, empowers the community to fix the data itself. The government's role then shifts towards being a subscriber and contributor rather than the sole owner.

For open data, this longevity is further promised by supporters continually seeking a range of improvements: the best platforms for sharing, the best apps, the best forecasting. Ownership will keep the conversation alive.

Next week I will be at the International Open Data Conference (IODC) in Ottawa to moderate a sharing session on data publishing methods. The IODC will be a unique opportunity to collaborate on creating a map of best practices within the area of data publishing.

Find out more about the sharing session here.



May 20, 2015 · IODC

A guest post from Yasodara Cordova, from W3C Brazil, on the Data on the Web Best Practices, and the need to link together policy and technology conversations. 

The activists who advocate for open data, and the governments that are involved through initiatives like the Open Government Partnership, sometimes seem to be dancing out of step. Maybe this is because we have not reached the tipping point in the open data ecosystem where data generates both advances in transparency and a chain of large-scale business innovation and economic growth. Researchers looking at what can stimulate open data ecosystems seek methods and processes that will lead data publishers to provide resources that meet the needs of stakeholders: from developers and businesses to nonprofit institutions and even individuals, each of whom has specific demands of data.

On the other hand, there have been many successful efforts towards the opening of data. Governments have opened their data even where they have not yet adopted common standards or followed general guidelines for data publication, proving that there is momentum behind opening up data and a need to show fast results on transparency. But now that more data is in the open, it may be time to start using standards in order to move beyond simple publication and reach the velocity of data re-use that everybody involved in transparency and open data expects and is aiming for.

Although there are no foregone conclusions to the question of how to accelerate open data ecosystems, there is a set of clear hypotheses around open data re-use. In Brazil, for example, hackathons have been widely used as “appetizers”, in an attempt to show the possible benefits of using open data: its ability to bring transparency and to increase the range of apps on offer for citizens to use.

In the last year alone there were more than 10 hackathons and challenges around open data in Brazil, and many, like Open Data Day Brazil, held at the Calango Hacker Club in Brazil's capital, had impressive results. But beyond apps, the work of the civic hackers sparked discussions about best practices for the publication of open data. For example, the W3C Brazil office launched a challenge involving data from the Ministry of Justice, in partnership with the ministry itself, and the most important outcome of the process was not the apps created but a GitHub-based discussion around the quality of the data. Through this conversation, developers were able to clean the data and bring it up to key standards, whilst also sharing the results of their work as a foundation for others to build upon (a sketch of this kind of clean-up follows below). These examples, amongst others, point to the importance of offering data using international standards in order to enable greater data re-use, and to meet not just the data publication but also the data use objectives of initiatives like the OGP. Working towards the adoption of standards and best practices around open data is something we, as a community, need to focus on.
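
As an illustration only, here is a minimal Python sketch of the kind of clean-up work described above: normalising dates to ISO 8601 and stripping stray whitespace so a CSV meets basic machine-readability expectations. The file and column names are hypothetical, not the actual Ministry of Justice schema.

```python
import csv
from datetime import datetime

def clean_row(row):
    """Normalise one record: trim whitespace, convert dd/mm/yyyy dates to ISO 8601."""
    cleaned = {key.strip(): value.strip() for key, value in row.items()}
    if cleaned.get("data_registro"):  # hypothetical date column
        parsed = datetime.strptime(cleaned["data_registro"], "%d/%m/%Y")
        cleaned["data_registro"] = parsed.date().isoformat()  # e.g. "2015-05-20"
    return cleaned

# Re-encode from a legacy encoding to UTF-8 while cleaning each record.
with open("raw.csv", newline="", encoding="latin-1") as src, \
     open("clean.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    fieldnames = [name.strip() for name in reader.fieldnames]
    writer = csv.DictWriter(dst, fieldnames=fieldnames)
    writer.writeheader()
    for row in reader:
        writer.writerow(clean_row(row))
```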

Based on this premise, in 2013 W3C launched a Working Group on Best Practices for Data on the Web. Since then, the group has explored many data publishing challenges, based around a set of use cases collected during the first phase of its work.

These use cases were important for identifying and selecting the priority challenges for the effective publication of data on the web. Each challenge is connected with particular technical aspects, and in response to each the working group has put forward best practices that are still in development and open for discussion in the W3C forums.

The first draft of the Best Practices document has a rough Portuguese translation that can be accessed here. The work of the CSV on the Web Working Group is also related, supporting the publication and use of open data on the web.
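
To give a flavour of what such standards enable, below is a simplified, CSVW-inspired illustration of publishing a CSV alongside machine-readable column metadata, so consumers can parse it without guesswork. The metadata is a hand-rolled Python dict loosely modelled on the working group's approach, not the actual CSVW vocabulary, and the file and column names are invented.

```python
import csv

# Invented, CSVW-inspired description of a published CSV file.
metadata = {
    "url": "budget.csv",
    "tableSchema": {
        "columns": [
            {"name": "year", "datatype": "integer"},
            {"name": "agency", "datatype": "string"},
            {"name": "amount", "datatype": "decimal"},
        ]
    },
}

casts = {"integer": int, "decimal": float, "string": str}

def read_with_schema(meta):
    """Yield rows from the CSV, typed according to the column metadata."""
    columns = meta["tableSchema"]["columns"]
    with open(meta["url"], newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {c["name"]: casts[c["datatype"]](row[c["name"]]) for c in columns}

for record in read_with_schema(metadata):
    print(record)  # e.g. {'year': 2015, 'agency': 'Health', 'amount': 1200000.0}
```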

The frontier between Public Data and Open Data: do we have to address that?

Although the recommendations made by the DWBP group focus on technical aspects of open data, it's important to note that political issues are also always present and tightly connected with best practices. Discussions over data licensing, privacy and security, for example, have an important place at the policy table, and can't be thrown away or left aside as technical issues alone when talking in fora like the Open Government Partnership.

Thus, to develop best practices effectively we need to formulate many questions:

  • Where are the frontiers between technical actions and policy responsibilities?
  • Can we navigate these blurred lines by establishing technical guidelines written into bills and laws?
  • How far are best practices for data on the web an open data issue, and how far do they also relate to the growing field of mining public data, where new techniques are being used to detect and address questions concerning resource management in cities and other problematic scenarios, such as the spread of disease? Do these represent fields where data best practices need to be discussed in fora like the OGP?

Standardising technologies and methods for sharing data has a vital role in triggering the whole cycle of open data, because its consequence is greater interoperability and reusability. Machine-readable datasets are ready to be used by developers and consumed by services and applications in a faster and more efficient way. The methods adopted by agencies that work with public data, and the methods used to release this data, can also be addressed as issues to be solved, which can be an important step towards unlocking the power of open data as a tool for transparency and accountability.

The big question is how we set, and continue to develop, these best practices, and how we get both technology and policy groups involved in a shared debate.



May 8, 2015 · IODC

A guest post from James McKinney of Open North, sharing research into the challenges and opportunities for open data standards best practice.

Open data is now on the agenda of most OGP governments and of many sub-national governments. However, its success is inconsistent: many citizens use open data for public transit through mobile apps, but many other open datasets are unused, or even undiscovered, by the public.

Among other barriers to the widespread use of open data, the lack of standardization is frequently mentioned in technical circles; our first report identifies several of the gaps in standardization across OGP members’ data catalogs. Few argue against standardization in principle – given that it generally increases predictability, efficiency and interoperability. We therefore designed a study to understand the reasons for the lack of standardization in practice.

Many recommendations and best practices for open data assume ideal conditions: no resource limits, high technical expertise, modern technical infrastructure, strong legislative and legal support, an “open by default” culture, and so on. The goal of our study was to arrive at recommendations that take into account the real conditions of governments' capacities and users' needs for open data.

In collaboration with Iniciativa Latinoamericana por los Datos Abiertos (ILDA), Open North interviewed governments and civil society organizations in 10 low- and middle-income countries to get a sense of their progress, challenges, needs and interests with respect to open data standards. After reviewing the previously identified gaps in standardization in light of the interview results, we proposed 32 draft recommendations, which we now invite stakeholders to discuss, comment on, and eventually implement.

The recommendations are intended to be accessible to a wide range of implementers, while delivering the maximum impact to data users. The recommendations are not intended to describe an ideal world scenario, but rather an achievable, real scenario. These recommendations are also intended to provide clear and specific guidance, in order to limit the additional effort that implementers must commit to understand, evaluate and implement a recommendation. To the extent that the current recommendations do not yet meet our intentions, we are working to improve them over the coming months in consultation with stakeholders.

This work fits into the work plan of the Standards stream of the OGP’s Open Data Working Group, which Open North co-leads with Data.gov. The final deliverable of the Standards stream, due late this year, is an OGP document describing final recommendations with respect to open data standards, which OGP members may use to inform new commitments in their national action plans.

The recommendations share several other important features.

Setting priority

Each recommendation is given a priority of “highly recommended,” “recommended” or “nice-to-have.” The highly recommended targets are intended to be the baseline that open data initiatives must meet to ensure minimum accessibility to data. The recommended targets are important but non-essential ways of making data easier for consumers to access and use.

The “nice-to-have” priority flags recommendations that should not divert attention away from the highly recommended and recommended targets. For example, while API access, URL patterns, and linked data are frequently discussed among open data experts, our study supports other targets as having higher priority.

Cumulative targets

The recommendations are cumulative; all recommendations can be implemented simultaneously. Furthermore, a government should maintain the highly recommended targets while it pursues the recommended and nice-to-have targets.

This approach differs from the 5-star open data model, in which each star requires replacing the prior implementation with a new one: for example, moving from publishing budgets as PDF documents (★), to Excel spreadsheets (★★), to CSV files (★★★), to RDF serializations (★★★★). In such a model, governments must decide which level to target in order to avoid the cost of reimplementation. Our approach avoids presenting such decisions.
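
For a sense of the reimplementation cost that ladder implies, here is a minimal sketch of the ★★★ to ★★★★ step: the same budget line re-expressed as RDF, using the rdflib Python library. The namespace and property names are invented for illustration, not an official vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/budget/")  # hypothetical vocabulary

# One budget line that was previously a CSV row: year, agency, amount.
g = Graph()
record = URIRef("http://example.org/budget/2015/health")
g.add((record, EX.year, Literal(2015)))  # auto-typed as xsd:integer
g.add((record, EX.agency, Literal("Health")))
g.add((record, EX.amount, Literal("1200000.00", datatype=XSD.decimal)))

# Serialize the graph as Turtle, one common RDF format.
print(g.serialize(format="turtle"))
```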

Assigning responsibility

While governments are ultimately responsible for the quality of their open data initiatives, in designing the recommendations we recognize that governments are not solely responsible: software vendors, whether they provide data catalog software or other data management and authoring tools, have a responsibility to better implement and support standards. To the extent that governments are currently limited in their ability to implement recommendations due to the unavailability of appropriate tools, several recommendations target software vendors for implementation. Solving a problem at the vendor level may also be less demanding than at the government level.

The work ahead

As we identify in the conclusion of our report, a major challenge for governments working to improve the quality of their open data initiatives is the lack of tools to measure their performance. We therefore propose creating self-assessment tools for governments to measure their implementation of the recommendations. These tools would support the adoption of the recommendations, which in turn would achieve easier access to, and use of, open data. The tools have other important benefits:

  • Rapid, quantitative feedback would accelerate cycles of improvement.
  • Automated measurement would be lower-cost and more comprehensive than manual measurement, producing more consistent implementations.
  • The experience of implementing these tools would help make the implementation guidance for the recommendations more actionable.

The implementation of several recommendations can be programmatically evaluated, making these tools possible; a sketch of one such check follows below. The interviews in our study established that governments are willing to adopt standards, but more work like this proposal is needed to ensure they are also able.
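
To make “programmatically evaluated” concrete, here is a minimal sketch of one possible check, assuming a CKAN-style catalog API (as used by portals such as data.gov.au; the portal URL below is a placeholder). It reports whether each dataset declares a licence and offers at least one machine-readable format. This illustrates the idea only; it is not the working group's actual tooling.

```python
import json
from urllib.request import urlopen

CATALOG = "https://catalog.example.org"  # placeholder portal URL
MACHINE_READABLE = {"CSV", "JSON", "XML", "GEOJSON"}

def check_catalog(base_url, limit=100):
    """Report basic quality signals for each dataset in a CKAN-style catalog."""
    url = f"{base_url}/api/3/action/package_search?rows={limit}"
    datasets = json.load(urlopen(url))["result"]["results"]
    for ds in datasets:
        has_licence = bool(ds.get("license_id"))
        formats = {r.get("format", "").upper() for r in ds.get("resources", [])}
        machine_readable = bool(formats & MACHINE_READABLE)
        print(f"{ds['name']}: licence={has_licence}, "
              f"machine-readable={machine_readable}")

check_catalog(CATALOG)
```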

In the meantime, we encourage everyone interested in improving the quality of open data to read our report and give feedback on the draft recommendations, which will be formalized in an OGP document toward the end of this year.

View the report to add your feedback, or share your general comments in the thread below…

