October 23, 2015 Mor Rubinstein

Guest post from Mor Rubinstein

Mor Rubinstein is the community coordinator and the research lead of the Global Open Data Index, a civil society audit of open data publishing, at Open Knowledge. She is also a hacktivist at The Public Knowledge Workshop, an Israeli NGO that builds applications for social change using open data. She holds an MSc in Social Science of the Internet from the Oxford Internet Institute, where she still carries out occasional research, such as the data4policy project.

For some people, the word “measurement” can trigger scary thoughts. It can involve numbers, rankings, and complicated formulas. It involves looking at data that we sometimes do not want to face, just like checking your bank balance at the end of the month before payday – it may not always be as positive as we hope. So why do we want to look the data about open data in the face? What are we trying to get out of it?

The answer is simple: we want to see what our actions achieved, how we performed, and, hopefully, to learn from that how we can get better. It requires a lot of work, but it is an investment in our future work. The question, then, should not be why we measure our open data efforts, but something even simpler – what do we want to measure? How can we measure effectively so we can work better in the future?

At the last International Open Data Conference (IODC), which took place in May 2015, we set up an open data measurement track. In our workshop we tried to tackle these four subjects:

  1. develop an open data assessment roadmap;
  2. refine the open data common assessment framework;
  3. grow the network of researchers; and
  4. develop domain specific assessments: starting with national statistics.

To take our first steps in creating a measurement roadmap, we need to understand what we want to measure and how. Great initiatives such as the Open Data Charter and the Sustainable Development Goals are looking at open data as a tool to measure and solve big policy issues. It is therefore key to understand what open data we have, what we need in terms of quality and quantity, and how we can use it. We need to understand what to survey and examine so we can work better with this open data.

Now is a good moment to stop and assess what we are doing – in short, assessing the assessment. Are we assessing and measuring the right things when it comes to open data? What are the community’s needs, and where are our blind spots? Looking at these questions will give us a better understanding of measurement and of how to go forward. At the OGP Summit next week, we will open these questions to you, the community. We hope that by gathering answers about these needs we can better map out the road to IODC 2016 in Madrid, Spain.

If you are interested in these topics and are not afraid to look the data in the eye, come and join our workshop at the OGP conference on Wednesday, October 28 at 12:30pm. The workshop will be facilitated by Carlos Iglesias (The Open Data Barometer, Web Foundation), Barbara Ubaldi (OECD), and myself. We will explore the topics above and try to understand what we really want to measure. Come and join us!



October 21, 2015 Bill Anderson

Guest post from Bill Anderson

Bill is an information architect with Development Initiatives, leading their work on the data revolution for sustainable development data and joined-up data standards. He is also the technical lead for the International Aid Transparency Initiative and the coordinator for the newly-formed Joined-up Data Alliance. He was part of the drafting team for the Africa Data Consensus.

The Open Data movement is roughly ten years old. Starting in California in 2007, a number of attempts have been made to enshrine the principles of open data. I think it is fair to say that most of these charters have been drafted by and for constituencies in North America and Europe, and have focused primarily on the relationship between western liberal governments and economies and the technically literate sections of their citizenry. Until now.

The international Open Data Charter launched in New York last month is the first globally inclusive and comprehensive manifesto of its type. It recognizes the north-south digital divide. It recognizes the challenges and needs facing developing countries. It embraces the data revolution for sustainable development data. As Jose Alonso from the Web Foundation has argued, it provides a canonical open data standard that can be embedded, adopted, and implemented by other related initiatives without duplication or fragmentation.

Of equal importance is the inclusion of the principle of “Comparability and Interoperability” – the first time that a charter like this has moved beyond transparency to properly consider the whole point of open data: usage. Most datasets are pretty meaningless on their own. It is only when they are combined and contextualized that they generally make sense. Joined-up data creates the information we need.

You can only join up datasets if they are comparable and “speak the same language”. The Charter recognizes “that in order to be most effective and useful, data should be easy to compare within and between sectors, across geographic locations, and over time”.

And if you wish machines to do the joining up for you, the systems, formats, and definitions used by each dataset need to be interoperable: “data should be presented in structured and standardized formats to support interoperability, traceability, and effective reuse”.

Most of the useful data that we need today – even that which is already open – resides in a massive Tower of Babel. Global institutions – the UN, OECD, World Bank, and IMF among them – maintain countless incompatible and competing standards covering financial, administrative, geographic, and socio-economic data. So do national governments. Most open data portals are silos in which thousands of datasets sit in isolation from each other, each speaking their own distinct language or dialect. Continuing this language metaphor, there are two solutions to the problem: either agree to speak the same language or get an interpreter.

Principle four of the Charter first calls for “increased interoperability between existing international standards” – making sure we can translate accurately between standards. It then demands that we “support the creation of common, global data standards where they do not already exist and ensure that any new data standards we create are, to the greatest extent possible, interoperable with existing standards” – in other words: standards that speak the same language.

The importance of the Charter is that it provides us with a platform from which to launch a two-pronged attack on data silos.

We need to engage with all global and regional standard-setting institutions

  • to persuade them of the importance of joined-up standards;
  • to get them to cooperate with each other on resolving incompatibilities; and
  • to get them to agree that all future standards will be properly joined-up.

And we need to stimulate the market to build more translation tools that will enable us to generate more usable and accessible information from existing data.
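
To make the “interpreter” idea concrete, here is a minimal, hypothetical sketch in Python of what such a translation tool does at its core: it rewrites records published against one local classification into a common one using a crosswalk table, and it keeps track of codes that still have no agreed equivalent. The codes and field names below are invented for illustration and are not drawn from any real standard.

```python
# Illustrative only: a tiny "interpreter" between two hypothetical coding schemes.
# The crosswalk maps a local sector code to an assumed common code.

CROSSWALK = {
    "EDU-01": "11110",   # hypothetical: primary education
    "EDU-02": "11120",   # hypothetical: education facilities and training
    "HLT-03": "12220",   # hypothetical: basic health care
}

def translate(records, crosswalk):
    """Rewrite each record's sector code using the crosswalk.

    Records whose codes have no agreed equivalent are reported separately,
    so gaps in interoperability stay visible instead of being silently dropped.
    """
    translated, unmapped = [], []
    for record in records:
        code = record["sector_code"]
        if code in crosswalk:
            translated.append({**record, "sector_code": crosswalk[code]})
        else:
            unmapped.append(record)
    return translated, unmapped

if __name__ == "__main__":
    sample = [
        {"project": "School rebuild", "sector_code": "EDU-01", "amount": 120000},
        {"project": "Rural clinics", "sector_code": "HLT-03", "amount": 80000},
        {"project": "Road upgrade", "sector_code": "TRN-09", "amount": 450000},
    ]
    ok, missing = translate(sample, CROSSWALK)
    print(f"translated: {len(ok)}, still unmapped: {len(missing)}")
```

The tool itself is simple; the real work, as the Charter implies, is in agreeing and maintaining the crosswalk between standards.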



October 19, 2015 Tim Davies

Guest post from Tim Davies 

Tim Davies is co-founder of Open Data Services Co-operative, and is a social researcher exploring open government and the civic implications of Internet technologies.

A few weeks back the International Open Data Charter was officially launched on the fringes of the United Nations General Assembly Meeting in New York. The Charter’s Principles, which were first previewed at the IODC15 conference in Ottawa, have been finalized over the last few months by a global network of stewards from government and civil society.

One of the final additions to the Charter was a new top-level principle on ensuring that open data is “Comparable and Interoperable”.

“Principle 4 – Comparable and Interoperable

We recognize that in order to be most effective and useful, data should be easy to compare within and between sectors, across geographic locations, and over time.

We recognize that data should be presented in structured and standardized formats to support interoperability, traceability, and effective reuse.

We will:

  1. Implement consistent, open standards related to data formats, interoperability, structure, and common identifiers when collecting and publishing data;
  2. Ensure that open datasets include consistent core metadata and are made available in human- and machine-readable formats;
  3. Ensure that data is fully described, that all documentation accompanying data is written in clear, plain language, and that data users have sufficient information to understand the source, strengths, weaknesses, and analytical limitations of the data;
  4. Engage with domestic and international standards bodies and other standard setting initiatives to encourage increased interoperability between existing international standards, support the creation of common, global data standards where they do not already exist, and ensure that any new data standards we create are, to the greatest extent possible, interoperable with existing standards; and
  5. Map local standards and identifiers to emerging globally agreed standards and share the results publicly.”

This principle picks up one of the key themes to come out of IODC 2015: the need to go beyond the publication of datasets, to also consider how data can be made re-usable across different countries and contexts—delivering data that can be joined up.

Often, this is treated solely as a conversation about technical standards, schema, and field names. But, by focussing on interoperability, the Charter taps into a much richer framework. Making data interoperable is not about agreeing a lowest-common-denominator set of fields that everyone must use for their data. Instead, it involves identifying the points of connection between different datasets, working to develop shared understanding of definitions, and building common approaches and identifier sets that make it easier to connect up diverse data. At the same time, it involves recognizing that no single schema should be imposed as the one true representation of reality. Oftentimes, interoperability requires not just standards, but also tools and processes to bridge and translate between data that was generated within different cultural and political contexts.
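
As a small, hypothetical illustration of why shared identifier sets matter, the Python sketch below joins a contracts dataset to a simple company register. The identifiers, field names, and values are invented; the point is only that once two datasets agree on an identifier scheme, combining them is mechanical, while the hard work sits in negotiating that agreement.

```python
# Illustrative, hypothetical data: two datasets describing the same organisations
# can only be joined once both sides key their records with a shared identifier.

# Dataset A: contract awards, keyed by an assumed shared organisation ID.
awards = [
    {"org_id": "XX-ORG-001", "contract_value": 250000},
    {"org_id": "XX-ORG-002", "contract_value": 90000},
]

# Dataset B: a company register keyed by the same identifier scheme.
register = {
    "XX-ORG-001": {"name": "Example Ltd", "owner": "A. Person"},
    "XX-ORG-002": {"name": "Sample Co", "owner": "B. Person"},
}

# The join itself is trivial; agreeing the identifier scheme is the human work.
joined = [
    {**award, **register.get(award["org_id"], {"name": None, "owner": None})}
    for award in awards
]

for row in joined:
    print(row)
```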


Concrete user needs must be at the heart of a global agenda for open data interoperability. Whether it is anti-corruption use cases that demand the ability to trace financial flows across borders, or use cases focused on improving trade in agricultural inputs and outputs to increase sustainability and address climate change, each instance of intended data use highlights particular features that will be needed to provide joined-up and interoperable data. Perfect interoperability is neither possible nor desirable. That means our efforts towards interoperability must be explicitly intentional.

One live example we’re working on at Open Data Services, the workers’ co-operative I co-founded earlier this year to focus on supporting the implementation of open data projects, involves the creation of a database of oil, gas, and mining projects around the world with the Natural Resource Governance Institute (NRGI). Legislation around the world increasingly demands project-level reporting by extractives companies—yet definitions of projects, and the categories of payments or reserves to be disclosed, vary across jurisdictions. Creating a data model and identifier schemes for this data involves considering the different users who want to know about extractives, and thinking about the different approaches, from regulation to voluntary consensus, through which shared approaches to identifiers and definitions can be developed.
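
As a rough, purely illustrative sketch of the shape of that problem, the Python data model below shows one canonical project record carrying several jurisdiction-specific identifiers and differently defined disclosures. The field names, regimes, and values are hypothetical and are not NRGI’s actual schema.

```python
# Hypothetical sketch of a project-level data model for extractives reporting.
# Each regime may name the "project" differently and define payments differently,
# so the canonical record keeps per-regime aliases and disclosures side by side.
from dataclasses import dataclass, field


@dataclass
class Disclosure:
    jurisdiction: str      # reporting regime under which the figure was filed
    project_label: str     # how that regime names/defines the project
    payment_type: str      # payment category as defined by that regime
    amount_usd: float


@dataclass
class ExtractivesProject:
    canonical_id: str                              # internally assigned identifier
    country: str
    commodities: list = field(default_factory=list)
    aliases: dict = field(default_factory=dict)    # regime -> local identifier
    disclosures: list = field(default_factory=list)


if __name__ == "__main__":
    project = ExtractivesProject(
        canonical_id="XPRJ-0001",
        country="Exampleland",
        commodities=["gold"],
        aliases={"regime_a": "A-123", "regime_b": "B-99"},
        disclosures=[
            Disclosure("regime_a", "Example Mine Phase 1", "royalties", 1200000.0),
            Disclosure("regime_b", "Example Mine", "taxes", 800000.0),
        ],
    )
    total = sum(d.amount_usd for d in project.disclosures)
    print(f"{project.canonical_id}: {len(project.aliases)} local IDs, "
          f"total disclosed ${total:,.0f}")
```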

As the conversation on interoperability moves forward over the coming year, we need to emphasize the importance of a human-scale and inclusive conversation. Whilst the vision of interoperable and comparable data involves the creation of ambitious new global open data infrastructures, it is vital that we ensure these infrastructures are not just shaped by one or two interested parties. Issue by issue, theme by theme, we will need to identify the stakeholders, explore priority datasets and user needs, and then negotiate the balance between standard development, tool building, and translation: critically aware of who is included and excluded from the conversation.

Through the work of the nascent Joined Up Data Alliance, the development of sector-specific packages for the resource centre of the International Open Data Charter, the conversations around a Global Partnership for Sustainable Development Data, as well as the work of thematic open data networks such as GODAN, there are many opportunities in the year ahead to develop these conversations. The energy is there – although so is the risk that the initiatives themselves will not be joined up. Perhaps our first challenge, then, is to find the right ways to build interoperability between these networks of people, to make sure we will be able to build interoperable data.

Hopefully by the time the open data policy and practitioner communities meet together again at IODC 2016 in Spain we’ll have some solid methods, approaches, and progress to build upon.

 



October 19, 2015 Heather McIntosh

By Heather McIntosh

The 3rd International Open Data Conference (IODC), held in Ottawa, Canada, on May 28 and 29, 2015, convened over 1,000 participants to build a collective agenda for the international open data community.

In the days immediately following the event, our colleagues in the Open Data for Development Program created the IODC 2015 Report, which combines a summary of conference outcomes with a vision for our community’s work over the coming year.

The final section of the report offers a Roadmap for the open data field—specifically for our work between this year’s conference in Ottawa and the 4th International Open Data Conference in Madrid, Spain in 2016. This framework is also informed by the emerging and complementary efforts of the Open Data Charter, the Joined Up Data Alliance, and the Global Partnership for Sustainable Development Data.

Drawing on the 2015 conference sessions and the rich array of pre- and post-conference events, our Roadmap is organized into five Action Areas:

  1. Open Data Charter—to deliver shared principles for open data
  2. Standards—to develop and adopt good practices and standards for data publication
  3. Skills and Learning—to build the capacity to produce and use open data effectively
  4. Problem Solving—to strengthen networks of open data innovation
  5. Measurement—to adopt common measurement and evaluation tools

Starting this week, the IODC and several esteemed contributors will be publishing a series of blogs, “Mapping out the Road to Spain”. With these articles, we will showcase insights from open data leaders and explore perspectives on these five Action Areas and how they will inform the planning for IODC 2016.

We seek to stimulate discussion and debate around the IODC roadmap, and we are seeking your help in this effort: your reflections on the discussions to come, and on the evolving Roadmap, will help to inform the growing open data community and guide our work, and that of our colleagues, as we prepare for IODC 2016 next October.

The first blog in the series, Seeking an Inclusive Conversation on Interoperability by Tim Davies, is now live. We encourage you to share your reactions on Twitter using the hashtag #IODCroadmap. If you or your organization publish blogs or new material that adds to this international dialogue, please let us know so we can share your work and widen the discussion.



August 11, 2015 IODC

Alexander Howard, The Huffington Post’s senior editor of technology and society, interviewed James Fletcher, Saint Lucia’s Minister for the Public Service (Sustainable Development, Energy, Science, and Technology), at the 3rd International Open Data Conference. Although brief, the discussion touched on some important questions around open data and improving governance.

Here are some of the highlights:

Open data strengthens democracy and improves governance. Open data lets citizens know what their government is doing, allowing them to assess its activities. Governments and citizens can therefore have “enlightened conversations” thanks to open data, encouraging both parties to take part in decision-making and policy-making.

The exchange of open data between government and citizens must flow in both directions. Even though government plays a central role in releasing open data as a way to improve democracy and governance, Fletcher says that open data is not only about making government data available. It also allows civil society to share its information with the government.

The digital divide cannot be ignored. The digital divide is a challenge in the Caribbean. While some parts of the islands have excellent broadband Internet access, many have very limited or no access at all. For this reason, Fletcher argues, the digital divide must be taken into account when talking about open data, because it can further isolate unconnected people and communities.

The culture of secrecy and withholding information must change. This is especially true for public service employees, who may hold back information because they believe it gives them more power and job security. Fletcher considers this view of public information and power counterproductive. In his view, employees are more valuable, and make themselves indispensable, when they share the information requested and demonstrate their ability to use that information across multiple platforms and sectors. Fletcher stresses the pressing need to change this culture of secrecy and power and to encourage the opening of data for the public good.

To see the full interview, follow this link:



August 11, 2015 Heather McIntosh

By Heather McIntosh

Alexander Howard, The Huffington Post’s senior editor of technology and society, interviewed James Fletcher, Saint Lucia’s Minister for the Public Service (Sustainable Development, Energy, Science, and Technology) at the 3rd International Open Data Conference. Although brief, the conversation touched on some important issues pertaining to open data and improving governance.

Here are some of the highlights:

Open data is about strengthening democracy and improving the level of governance. Open data helps citizens learn about what governments are doing, allowing them to make informed assessments of government activities. Governments and citizens can then have “enlightened conversations”, empowering both parties in decision-making and informing policy.

There needs to be a two-way data flow between the government and citizens. Although governments play a central role in opening data to improve democracy and governance, Fletcher states that open data is not just about making government data available. It is also about civil society feeling empowered to share their information with the government.

The digital divide cannot be ignored. The digital divide persists as a challenge in the Caribbean; although many areas of the islands have excellent access to broadband Internet, there are also many regions that have little or no connection. Because of this, Fletcher notes that attention needs to be paid to the digital divide when thinking about open data, as it can further isolate non-connected individuals and communities.

The culture of secrecy and withholding information needs to change. This is particularly an issue among public service employees, who may withhold information because they believe that doing so makes them more powerful and provides job security. Fletcher deems this understanding of public information and power counterproductive; he states that employees are actually more valuable and indispensable if they share information and can demonstrate their ability to use such information across multiple platforms and sectors. Fletcher emphasizes the pressing need to change this culture of secrecy and power and to incentivise the opening of data for the public good.

See the complete interview here:



August 7, 2015 IODC

Alexander Howard, The Huffington Post’s senior editor of technology and society, sat down with DJ Patil, Chief Data Scientist at the United States Office of Science and Technology Policy, to talk about government data, open data, and the changes and activities needed to launch the data revolution.

Patil begins by stressing the great importance of open data, explaining that it is an integral part of our existence and necessary for us to develop and flourish. Without open data, we cannot understand our problems and are unable to improve. Open data lets us assess problems and find solutions. For example, Patil discusses the growing importance of the democratization of organizational data, under which every member of an organization has access to its data. Organizations are thus better equipped to understand themselves and, in turn, to make their operations more efficient and effective.

Although Howard and Patil cover various aspects of open data, one highlight of the interview is Patil’s explanation of the three considerations needed to maximize the public benefits of open data while minimizing harm. Patil breaks his answer down into three main points:
1. open data must be understood as an ecosystem;
2. we need to think about the best way to produce open data and make sure that data is sufficient;
3. we need to better identify standard-setting bodies and ensure they work well.

To watch the full interview, see the details below.



August 7, 2015 IODC

Alexander Howard, The Huffington Post’s senior editor of technology and society, sat down with DJ Patil, Chief Data Scientist at the United States Office of Science and Technology Policy, to talk government data, open data, and the changes and activities needed to propel the data revolution.

Patil begins by emphasizing the great importance of open data, explaining how it is an essential element of our existence, necessary for us to develop and flourish. Without open data, we can’t understand our problems, and therefore can’t get better. With open data, we are able to assess problems, and identify solutions. For example, Patil discusses the emerging importance of organizational data democratization, wherein all members of a company have access to the organization’s data. In this way, organizations are better equipped to understand themselves and, in turn, increase the efficiency and effectiveness of their operations.

Though Howard and Patil discuss various aspects of open data, a highlight of the interview is Patil’s explanation of the three considerations required to maximize the returns of open data for the public, while minimizing harm. Patil breaks his answer down into three main points:
  • Open data needs to be understood as an ecosystem.
  • We need to think about the right way to produce open data and ensure that it is sufficient.
  • We need to better define and implement standards bodies.

Check out the video below for the complete interview:



August 6, 2015 IODC

Alexander Howard, The Huffington Post’s senior editor of technology and society, interviewed Ania Calderón, General Director of Open Data for the Coordination of the National Digital Strategy in the Office of the President of Mexico, during the 3rd International Open Data Conference, to discuss the role open data plays in Mexico and the ways she uses it to inform policy-making and support development.

Calderón begins by saying that the obligation for government to provide information to the public is nothing new. The Government of Mexico’s open data activities, however, are recent; in fact, the policy conversations around them began just under two years ago.

Calderón explains that in the early stages of open data and policy development, the Government of Mexico focused mainly on publication. It sought first to share data, and then to polish and refine it after release. Almost two years later, the approach has changed: the aim is no longer simply to produce open data, but to deliver impact. This idea is at the heart of how the Government of Mexico now structures its open data policy.

To give a concrete example of that impact, Calderón talks about maternal mortality, a problem that is not new in her country but remains pressing. Her team approached the Ministry of Health, where it found a wealth of data on maternal health and mortality. Yet this data had not been used to better understand the problem or to design better interventions. By working with data scientists and academics, the team came to better understand the influence of the variables contributing to maternal mortality, such as level of education and number of prenatal visits, allowing it to tackle the problem more effectively.

Howard and Calderón go on to discuss questions specific to open data and development, reinforcing the conclusion that open data and analytics are essential to enabling the targeted interventions that development requires.

The full interview follows.



August 6, 2015 Heather McIntosh

By Heather McIntosh

Alexander Howard, The Huffington Post’s senior editor of technology and society, interviewed Ania Calderón during the 3rd International Open Data Conference. Calderón, the General Director of Open Data at the Coordination of National Digital Strategy in the Office of the President of Mexico, discussed the role of open data in Mexico and the ways in which her work with open data seeks to inform better policy and foster development.

Calderón begins by stating that the need for information provided by the government to the public is not new. However, the Government of Mexico’s activities in and pertaining to open data are; in fact, conversations on open data policy have been going on for just under two years.

Calderón explains that in the early stages of open data and policy, the Government of Mexico’s focus was on publishing. It was primarily interested in sharing data and then polishing and refining it after its release. Almost two years later, the focus has shifted: it is no longer just about delivering open data – it is now about delivering impact. This idea is fundamental to how the Government of Mexico is structuring open data policy at present.

To give a specific example of impact, Calderón discusses the issue of maternal mortality, a problem that is not new but continues to be pressing in Mexico. Her team approached the ministry of health and found a wealth of data being collected around maternal health and mortality. However, it had yet to be used to improve understanding of the issue or to create better interventions. Collaborating with data scientists and academics, the team was able to better understand the influence of variables contributing to maternal mortality, such as level of education and number of prenatal visits, allowing them to address the problem in a more effective manner.

Howard and Calderón further discuss issues pertaining to open data and development, reinforcing that open data and data analytics are essential to facilitating the targeted interventions that development requires.

See the complete interview below:

