Key take-aways from the Measurement Action Track at IODC 2016

During the IODC 2016, the “Action Track: Measurement and Increasing Impact” sought to review the need and role of research for (scaling) open data practice and policy. The track was informed by the various sessions and workshops that took place at the Open Data Research Symposium prior to the Conference. In what follows, we summarize what we heard throughout the conference.

Headline message that emerged from our engagement with the community at IODC: to realize the potential of open data, more evidence is needed on its full life cycle – within and across settings and sectors.

Many participants acknowledged and shared progress toward gathering evidence on developments, actors and conditions that impact open data. Yet, a consensus emerged that more systematic research is still needed. An “evidence-based and user-centric open data” approach is necessary to drive adoption, implementation, and use.

In particular, three substantive areas were identified that could benefit from interdisciplinary and comparative research:

Demand and use: First, many expressed a need to become smarter about the demand and use side of open data. Much of the focus, given the nascent nature of many initiatives around the world, has been on the supply side of open data. Yet to be more responsive and sustainable, more insight needs to be gained into demand and user needs.

Conversations repeatedly emphasized that we should differentiate between open data demand and use. Open data demand and use can be analyzed from multiple directions: 1) top-down, starting from a data provider, to intermediaries, to the end users and/or audiences; or 2) bottom-up, studying the data demands articulated by individuals (for instance, through FOIA requests), and how these demands can be taken up by intermediaries and open data providers to change what is being provided as open data.

Research should scrutinize each stage (provision, intermediation, use and demand) on its own, but also examine the interactions between stages (for instance, how may open data demand inform data supply, and how does data supply influence intermediation and use?).

Several research questions were proposed including the following: What is the demand for open data – and do interest groups understand the potential value open data conveys for them? If so, how to study these interest groups? Who are the audiences of open data? What are the different types of users (and users of users)? What are their needs? What are the problems or opportunities current and potential users seek to address using open data? When do users become producers of data and vice-versa? What is the role of data intermediaries in providing and using open data? How to study and establish feedback loops between open data users, intermediaries, and providers that can help make open data more relevant to users?  Do we need professional standards for different types of users – such as, for instance, data journalists?

Unfortunately – besides traditional UX research methods – no method exists for data holders and/or users to assess demand and use in a manner that can inform design and policy requirements.

Next steps:

  • Toward that end, it was suggested to create a collaborative effort to develop a “diagnostic tool or method” to map and analyze the open data ecosystem, in order to better understand the needs, interests, and power relations of different stakeholders, users, non-users, and other audiences.
  • In addition, to be more deductive and explanatory, and to generate insights that are operational (for instance, with regard to which users to prioritize), several IODC participants recommended expanding the development and exchange of “demand and use” case studies based on interdisciplinary perspectives (going beyond a descriptive collection of examples).

Informing data supply and infrastructure: Second, we heard on numerous occasions a call upon researchers and domain experts to help identify “key data” and inform the government data infrastructure needed to provide it. Principle 1 of the International Open Data Charter states that governments should provide key data “open by default”, yet the question remains how to identify “key” data (e.g., would that mean data relevant to society at large?).

Which governments (and other public institutions) should be expected to provide key data and which information do we need to better understand government’s role in providing key data? How can we evaluate progress around publishing these data coherently if countries organize the capture, collection, and publication of this data differently?

Next Steps: Several steps were suggested to help policy and decision makers prioritize data sets and allocate resources accordingly, including:

  • Develop decision trees that compare and integrate evidence on the demand, benefits and risks of data-sets;
  • Identify and analyze “data deserts” – where no or little data is collected and made available;
  • Develop and provide assessment frameworks for National Statistical Offices on the potential value of certain data-sets.

Impact: In addition to those two focus areas – covering the supply and demand side – there was also a call to become more sophisticated about impact. Too often impact gets confused with outputs, or even activities. Given the embryonic and iterative nature of many open data efforts, signals of impact are limited and often preliminary. In addition, different types of impact (such as enhancing transparency versus generating innovation and economic growth) require different indicators and methods. At the same time, to allow for regular evaluations of what works and why, there is a need for common assessment methods that can generate comparative and directional insights.

Next steps: Joint efforts were recommended to develop

  • Data-value chain assessment mechanisms that can identify and illustrate how value gets generated (if at all), at what stage and under which conditions;
  • A conceptual framework that can accommodate the (e)-valuation of data as an infrastructure or “commons” (similar to other public interest resources such as green spaces or air quality).

Research Networking: Several researchers identified a need for better exchange and collaboration among the research community. This would make it possible to tackle the research questions and challenges listed above, to identify gaps in existing knowledge, to develop common research methods and frameworks, and to learn from each other. Key questions posed included: how to nurture and facilitate networking among researchers and (topical) experts from different disciplines, focusing on different issues or using different methods? How are different sub-networks related to or disconnected from each other (for instance, how connected are the data4development, freedom of information, and civic tech research communities)? In addition, an interesting discussion emerged around how researchers can also network more with those who are part of their respective universe of analysis – potentially generating some kind of participatory research design.

Next steps: To enable networking and increased matching of expertise, needs and interests, resources and efforts must be directed toward:

  • A collaborative (and dynamic) mapping of the current open data research eco-system – identifying both the supply and demand for research; and how research questions and methods of different research disciplines already intersect and could cross-pollinate each other;
  • Network analysis of the open data research universe to identify gaps and hubs of expertise (including, for instance, possible correlation analysis of participants of different open data-related conferences);
  • Experimentation with participatory research design – not only to study “the user”, but to study open data “with the user”;
  • Experimentation and evaluation of different networking and collaboration platforms. This may increase our understanding of the usability and usefulness of existing research networks.
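The suggested correlation analysis of participants at different open data-related conferences could start with something as simple as set overlap. The sketch below computes Jaccard similarity between two attendee lists; the event names and participants are entirely invented for illustration.

```python
# Hypothetical attendee lists for two open-data-related events.
iodc = {"ana", "bilal", "chen", "dana", "emre"}
foi_camp = {"chen", "dana", "farah", "goran"}

def jaccard(a, b):
    """Jaccard similarity: size of the intersection over size of the union."""
    return len(a & b) / len(a | b)

print(round(jaccard(iodc, foi_camp), 2))  # 0.29 (2 shared attendees of 7 total)
```

A low score between two communities would flag exactly the kind of disconnected sub-network the discussion pointed at.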

To conclude, the papers submitted and presented at the Open Data Research Symposium (all downloadable from odresearch.org) and the growing literature on open data (see for instance ogrx.org) indicate that much progress has been made toward an enhanced understanding of open data – its suppliers, users and practices. Yet as the open data community matures, more evidence is needed to guide future investments and uses. Ultimately the open data community should “walk the talk” and become more data-driven – which means that more investment is needed to support research and to network the evidence and expertise that already exists.

This blogpost intends to start a conversation about how we can better research open data. We invite everyone interested in this important area to discuss with us the research topics proposed here. How can we operationalize the topics described above? Are any important research areas missing? Please feel free to use established venues, including the Network of Innovators and the Open Knowledge Discuss Forum for Research and Policy. This allows us to create central discussion channels that bring interdisciplinary research interests together.

See also: Open Data Research Symposium


Featured image: International Open Data Conference.


Principle Four of the Open Data Charter, adopted a year ago, recognizes that open data should be easy to compare and be presented in structured and standardized formats to support interoperability, traceability, and effective reuse. The Charter calls on data producers to:

  • Implement consistent, open standards related to data formats, interoperability, structure, and common identifiers.
  • Ensure that open datasets include consistent core metadata and are made available in human- and machine-readable formats.
  • Ensure that data users have sufficient information to understand the source, strengths, weaknesses, and analytical limitations of the data.
  • Engage with standards bodies to encourage increased interoperability between existing standards and support the creation of common, global data standards.
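To make the Charter's call for "consistent core metadata" concrete, the sketch below builds a minimal DCAT-style dataset record as JSON. The field names follow common DCAT/Dublin Core usage, but the specific values, the chosen core-field list, and the `missing_core_metadata` helper are illustrative assumptions, not part of any official specification.

```python
import json

# A minimal DCAT-style dataset description. Field names follow common
# DCAT / Dublin Core usage; the values are invented for illustration.
dataset = {
    "@type": "dcat:Dataset",
    "dct:title": "Air quality measurements, Madrid, 2016",
    "dct:description": "Hourly readings from municipal air quality stations.",
    "dct:publisher": "Ayuntamiento de Madrid",
    "dct:issued": "2016-09-01",
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:distribution": [
        {"dcat:mediaType": "text/csv",
         "dcat:downloadURL": "https://example.org/air-quality.csv"},
    ],
}

# A hypothetical core-field list a publisher might enforce across a catalog.
CORE_FIELDS = ["dct:title", "dct:description", "dct:publisher",
               "dct:issued", "dct:license"]

def missing_core_metadata(record):
    """Return the core metadata fields absent from a record."""
    return [f for f in CORE_FIELDS if f not in record]

print(json.dumps(dataset, indent=2))   # human-readable serialization
print(missing_core_metadata(dataset))  # [] -- nothing missing
```

Checks like this, run automatically on every published dataset, are one simple way to keep metadata consistent across a portal.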

Open Data Standards Day on October 5, 2016 at the upcoming 4th International Open Data Conference is focusing on three areas where we can start turning the Charter’s principles into practice.

Can we get some basics right when publishing open data?

Jose Luis Marin from EuroAlert.net will further his arguments outlining simple rules to follow to ensure that data is reusable. Deirdre Lee from Derilinx will talk about the work of the W3C Data on the Web Best Practices group and the Open Data Technical Framework developed for Data.gov.ie. Beata Lisowska from Development Initiatives will talk about the challenges implementing DCAT and offer up the Development Data Hub as a typical example of a portal in desperate need of metadata. This session will aim to reach a common approach to deploying metadata and put out a call to all data producers to join us in these efforts.

Joining up data on beneficial ownership

Grand corruption is a networked problem; to fight it we need a networked solution, and nowhere is this more true than with who controls and benefits from companies. Hera Hussain from OpenCorporates will lead a workshop on the Global Beneficial Ownership Register (which tackles this by collecting and linking together beneficial ownership datasets and self-disclosed corporate data) alongside Kristen Robinson from The Web Foundation and Georg Neumann from the Open Contracting Partnership. By doing this, the register will enable investigators and corruption fighters to see the connections between ownership of companies and trusts the world over, and allow governments and businesses to more easily see who they are really doing business with. The session seeks to explore user needs and use cases while testing assumptions on what the register needs to deliver to provide the transparency needed to reduce fraud and corruption.

Identifying organizations

Consistent use of identifiers across datasets is vital to enable joined up data, and activities that can ‘follow the money’ across different datasets. However, data publishers frequently struggle to use organisation identifiers consistently, and the lack of authority lists of public bodies means government entities are rarely referenced consistently. This is a problem facing a range of standards-setters and, when the Joined-up Data Alliance was formed at IODC 2015 in Ottawa, this went to the top of the agenda. Open Data Services has won the backing of the International Aid Transparency Initiative, Open Contracting, Natural Resource Governance Institute, OpenAg and 360 Giving to build a common platform with a universal methodology. Steven Flower will reveal a minimum viable product and lead a workshop exploring use cases and user needs.

Open forum

Open Data Standards Day will kick off with an open forum for anyone to present their work dealing with practical solutions that improve the comparability and interoperability of data. If you would like to join us on October 5, please register here. If you would like to present your work, please drop a line to Bill and James. We will also highlight the outcomes from Open Data Standards Day at the Action Session on Standards at 8:30–9:30 on October 7 in IFEMA Room B.

Featured image: Marius Neugebauer


September 30, 2016 ODI Madrid

ODI Madrid is a node of the Open Data Institute that aims to create learning, market value and an appropriate ecosystem for the good use and reuse of open data, through the delivery of specific ODI training courses, the organization of events, and the promotion of relevant information for the sector.

To that end, ODI Madrid is supported by the Ontology Engineering Group (OEG) (www.oeg-upm.es), in which a team of researchers in the areas of open data, ontologies, the Semantic Web, linked data, multilingualism and open science works closely with industry partners that use open data as a fundamental tool: smart cities, public administrations, digital humanities, education…

Open Data in Half a Day will take place on October 4 from 3:30 pm to 7:30 pm in Room D at the IFEMA North Convention Center, as one of the pre-events of the International Open Data Conference.

The course, organized by the ODI Madrid team – also members of the Ontology Engineering Group (OEG) – will take participants through a practical itinerary, starting with a short presentation of the basic concepts of open data. This introduction will allow participants to share their own ideas and expertise, and will be followed by an interactive session.

The Methodological Guide for Sectoral Open Data Plans, which Red.es and the Aporta Initiative published in August, demonstrates the existence of a strong infomediary sector, as well as an active data market “that generates a representative volume of business and offers a series of qualified and quality job positions.” However, it also confirms that we are still far from the benefits projected in the initial studies. The report analyzes the current state of open data and states that we need to “promote a dynamic ecosystem around open data, capable of generating solutions” to help overcome the obstacles and specific challenges arising in each sector. Among the elements identified as necessary to create this ecosystem and overcome its cultural obstacles are privacy, legal, organizational, semantic and technical aspects, all culminating in training. Training is, then, the last obstacle to overcome, and it is essential for seizing the opportunities open data offers when launching any project with it at its core.

The Open Data in Half a Day course includes an introduction to open data, with a section on the basic terminology for preparing a good dataset, information about publication, and an outline of the licenses needed to use databases. To close the course, different cases of open data use will be presented. This closing session is the core of the course, and will equip participants to address the benefits and opportunities of open data as raw material.

The main objectives of this training session are: identifying a list of formats for the publication of data and a list of repositories in which to find them; applying the five open and linked data principles; and the practical application of copyright and open data-related licenses.

After the course, participants will be able to recognize the benefits of open data, apply them at their organization, and make use of open data, drawing on success stories and projects launched thanks to its good application.

Attendance at the course – aimed at business people, public and/or private sector employees, and master’s or undergraduate students – is free of charge, and a form has been created for those interested to submit their information and interests prior to the event.


Cover photo by Caleb George.


September 29, 2016 Georg Neumann

Open contracting is about collaborating, communicating and using technology to innovate development – all of which have been a focus of Georg’s professional career prior to joining the Open Contracting Partnership, where he manages communication and advocacy. At the Inter-American Development Bank, Georg led the Digital Strategy of the Multilateral Investment Fund, focused on supporting micro, small and medium businesses and entrepreneurs in Latin America. He also co-initiated the organisation’s portfolio of projects testing social innovations such as crowdfunding. His nexus to transparency and anticorruption originated at Transparency International, where he managed online and internal communications for the global movement and led the discussion of solutions for using technology to fight corruption. He has worked on development projects in Mexico and Morocco and holds a Masters in Strategic Economic and Social Communications from the University of Fine Arts in Berlin.

Schools, hospitals, streets and, for me, bike lanes: This is where citizens want their money to go. How does a government department make sure it gets the best company to provide what it needs for the best value? How can a business know what a government buys, and when?

When we started with the idea of open contracting, we knew we had to make sure these different needs were addressed, and to understand “who, what, where, when, and how” a government spends its money.

But transparency and openness are only half of the story. Open contracting is about creating a better, fairer and more open process from planning to delivery. In a way, it is like tracking a package when shopping online: you want to receive what you bought intact, at the expected delivery time – and it can be quite satisfying to see when your package leaves the storage facility. Government contracting is obviously much more complex, but the same principle of reducing nasty surprises applies.

For this, we’ve developed the Open Contracting Data Standard, a schema that describes the information and documents for the full process of government contracting that should be readily available for anyone to access. Whether to investigate corruption or to find the most relevant bid, you will need to connect different types of data to draw meaningful conclusions. The Standard is designed with four key use cases in mind: value for money, detecting fraud and corruption, fairer competition, and monitoring service delivery.
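To give a flavor of what such a schema describes, the sketch below assembles a heavily trimmed, illustrative contracting release. The top-level field names (ocid, tag, buyer, tender, awards) echo the OCDS release schema, but this is not a complete or validated OCDS document, and the ocid, amounts and names are invented.

```python
# A trimmed, illustrative open contracting release. The top-level field
# names echo the OCDS release schema; the values are invented.
release = {
    "ocid": "ocds-xxxxxx-000-00001",  # unique id for one contracting process
    "date": "2016-07-01T00:00:00Z",
    "tag": ["award"],                 # which stage this release describes
    "buyer": {"name": "Ministry of Transport"},
    "tender": {"title": "Bike lane construction",
               "value": {"amount": 500000, "currency": "EUR"}},
    "awards": [
        {"suppliers": [{"name": "Example Construction Ltd"}],
         "value": {"amount": 480000, "currency": "EUR"}},
    ],
}

def award_total(rel):
    """Sum the awarded amounts in a release, e.g. to compare against the tender value."""
    return sum(a["value"]["amount"] for a in rel.get("awards", []))

print(award_total(release))  # 480000
```

Because every release in a process shares the same ocid, tools can follow one contract from planning through implementation by joining releases on that identifier – the "package tracking" idea above, in data form.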

Open contracting has seen a huge wave of commitment and implementation since the 2015 IODC in Canada. I want to briefly share with you five innovations that have surfaced over the last year.

1) Fighting corruption and restoring trust

Ukraine has been known for widespread corruption and cronyism. It’s not surprising then that addressing these risks is at the center of Prozorro, the country’s new public procurement system, built on open contracting. Beyond uncovering fraud and waste, a crucial objective of Prozorro is to restore the trust of businesses, who had long given up even participating in government tenders.

This open source e-procurement platform, which is modeled on the Open Contracting Data Standard, provides access to a centralized, dynamic database so that other firms can offer additional services to the private sector, tailored to its specific needs. Thanks to this innovative, open process, participation by companies in public tenders has risen by 30-50%.

2) Connecting cross-national data

OpenOpps is a platform that pulls in, shares and analyzes tender data from across Europe. While this repository of open contracting data can be used to investigate fraud and irregularities, it also serves as a commercial website that helps businesses find opportunities to trade with government. As of July 2016, it has collected a quarter of a million documents from across Europe.

3) Tracking education and health services

When Seember Nyager and her team at the Public Private Development Centre in Nigeria looked at the government’s basic services, they realized they not only had to look at the budget, but also at how money was being spent through contracts. Connecting these two datasets was crucial to understanding what was happening in service delivery. A fascinating investigation ensued (see here), and Budeshi – a Hausa word for “open it” – was born. This open source, Nigerian-developed open contracting platform has helped convince the government of the benefits of open contracting and led it to commit to implementing the Data Standard. Organizations in Kenya and Malawi are now looking at using Budeshi as well.

4) Opening up public-private partnerships

Public Private Partnerships are one of the tools governments use to tackle bigger projects and investments, often large – and challenging – infrastructure such as roads, bridges and airports. These projects last decades, not years, and have to deal with many risks throughout their lifetime. Given the high level of complexity, it is essential to have a structure for effectively tracking the progress of these projects, as well as transparency.

In Mexico, the Ministry of Communications and Transport has started disclosing information and documents on Red Compartida, the country’s largest PPP project, which is set to create a shared 4G telecoms network.

5) A closer feedback loop

Cities are where an administration’s spending is nearest to the needs of citizens. What are your kids being fed at school? Does the public transport network work? It’s no surprise that cities, eager to show what’s being done for their communities and to generate opportunities for business, are excited about the potential to create more transparency around contracts. Mexico City has been the first to implement the Standard, from contract planning to implementation, and has started publishing contracts from its finance department. In working with civil society, Mexico City has identified more information needs. Montreal, driven by major corruption scandals, has launched a visualization platform to explore its contract data. It’s encouraging to see Buenos Aires looking into publishing contracting data from the Argentine capital as well.

The field of open contracting implementers is growing, and spreading globally. Hopefully, these examples have given you some ideas for how open contracting can be useful for you and your area of work. Our helpdesk is there to help anyone who’s interested (shoot them a message at data@open-contracting.org).

We look forward to discussing these and other examples from Albania, Canada, Colombia, France, Kosovo, Moldova, Paraguay, Spain, the US, Vietnam, Zambia and others at our IODC session on October 4, Open Contracting: Progress, Challenges, Innovation (register here so you don’t miss out), and throughout the variety of panels and sessions during the week.

Do you know what your government is spending its money on?


Cover photo by Jared Doyle


This blog is part of a short series by The Center for Open Data Enterprise. The purpose of the series is to highlight use cases from the Open Data Impact Map, which will be launched October 3, 2016 in preparation for the 4th International Open Data Conference.

Around the world, businesses, nonprofits, governments, and citizens are using open data as a valuable resource in their work. The demand for open government data provides a compelling rationale for growing open data programs. Understanding who uses open data, and how it is used, helps prioritize the most important datasets. It can help inform investments in open data, and inspire novel uses of this public resource.

One example of open data use is Farmerline, a Ghana-based SMS service that provides small-scale farmers with up-to-date agricultural information and advice. Farming accounts for over a fifth of the country’s GDP and 42% of the workforce. Many farmers in Ghana operate small-scale farms and face an uphill battle for agricultural information due to a lack of Internet access and the diversity of local languages.


Farmerline addresses both of these issues, providing local farmers access to accurate, up-to-date weather forecasts and market prices. In turn, the farmers gain immediate knowledge of competitive pricing and achieve steadier, if not larger, yields thanks to the agronomic tips and support.

Open government data is at the heart of this effort. Launched in 2013, Farmerline gathers open data on weather and market prices, and then repackages the information into outbound text and voice SMS messages, serving over 5,000 farmers in Ghana in nine different local languages. By fully incorporating voice messages in its service, Farmerline has reached farmers in rural areas, where literacy rates remain under 50%.
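The repackaging step can be pictured as a simple transformation from a structured open-data record into a short message. The sketch below is purely illustrative: the record, the per-language templates, and the `format_sms` helper are invented, not Farmerline's actual pipeline.

```python
# Invented open weather/market record for one district.
record = {"district": "Ashanti", "rain_mm": 12, "maize_price_ghs": 85}

# Hypothetical per-language templates; a real service would cover nine
# local languages (and voice renderings of the same content).
TEMPLATES = {
    "en": "{district}: rain {rain_mm}mm expected. Maize {maize_price_ghs} GHS/bag.",
}

def format_sms(rec, lang="en"):
    """Render a data record into a single SMS-sized message."""
    msg = TEMPLATES[lang].format(**rec)
    assert len(msg) <= 160, "must fit in one SMS"
    return msg

print(format_sms(record))  # Ashanti: rain 12mm expected. Maize 85 GHS/bag.
```

The interesting design constraint is the 160-character SMS limit: the value added by an intermediary like this lies precisely in compressing open datasets into something usable on a basic phone.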

This example, and many others, will be featured on the Open Data Impact Map (www.OpenDataImpactMap.org), officially launching October 3rd for the 4th International Open Data Conference. The Open Data Impact Map is a public database of organizations using open data from around the world. It allows users to explore a database of 1,700+ organizations from 80+ countries, and learn more about the types of data they are using, how they are using it, and the impact it has on their work.

The Open Data for Development (OD4D) network in partnership with the Center for Open Data Enterprise will be launching the Open Data Impact Map at IODC. For more information, visit the OD4D booth during the conference.


Cover photo by Marta Serrano


Ana Alvarez is Web Manager at the Museo Thyssen-Bornemisza in Madrid. She has a BA in Medieval History (Complutense University, Spain) and a Master in Museum Studies (Leicester University, UK). Her professional career has developed around digital content in the cultural sector – digitization, websites and information management – and not only at museums. She has been a consultant on ICT projects related to cultural heritage and on a digital educational content project for the Spanish Education repository at Red.es (Spanish Ministry of Industry), and was the Spanish representative at the European NRG (National Representatives Group on Digitisation in Culture), which was the seed of the European Commission’s digital culture projects such as Europeana. Back in the museum world, she is involved not only with web management but with strategic digital curation issues. She is a member of the Board of SEDIC, where she promotes training and dissemination initiatives for information professionals.

César Iglesias is a partner at Kuroshiro and an attorney and consultant with over 13 years of working experience in the fields of Information Technologies (IT) and Intellectual Property (IP), both as an in-house and external counsel and as a consultant for private companies and Public Administrations. He has degrees in both Law (ICADE) and Economics (UNED) and an International Executive Master in Business Administration (IE). He also holds a CISA (Certified Information Systems Auditor) certificate. He is a member of Madrid’s Bar Association, the Information Systems Audit and Control Association (ISACA) and the Spanish Association for the Study and Teaching of Copyright (ASEDA), where he is the General Secretary. In the public sector, César Iglesias has acted as legal counsel for Red.es in the development of Electronic Administration Services and as an independent expert for the European Commission in projects for the development of Europeana and the Open Data initiative. In the private sector, acting as an independent attorney, he has provided legal advice on compliance and personal data protection regulations for many years. César Iglesias has also acted as Legal Manager for a number of companies, including the Sociedad General de Autores (SGAE) and, currently, ISDE. He has written numerous articles and books on IT Law and Copyright, and is a regular lecturer in a number of postgraduate courses and a regular speaker at seminars and conferences.

At the end of 2015, the law that establishes the rules for the re-use of public sector information in Spain was modified to make it compulsory for public libraries, museums and archives to make their data reusable. This followed the 2013 modification of Directive 2003/98/EC on the re-use of public sector information, also known as the “PSI Directive”.

The law has raised several questions: Is a work of art such as a sculpture or a book “data”? What is considered “data”? How and when should memory institutions publish their data? Is Open Data synonymous with data reuse?

First step: Digitisation

Online cataloguing and digitisation are the current basis for the asset management of cultural institutions. For many years, and with a huge investment of resources, these institutions have made an enormous effort to reorganise their data in order to be able to share it – at first internally, and nowadays mostly through their websites, services and digital content projects (retrieval tools, collaborative projects with other institutions, mobile apps, interactive galleries, audiovisuals, etc.). Progress has not been even among memory institutions: libraries have been at the avant-garde, partly due to the more extensive standardisation of their data sets, while archives and museums, due to the particularities of their collections, started normalisation later but are catching up.

Reuse of public sector information and Open Data

“Reuse of public sector information” is a concept derived from the intuition that data already collected by public sector bodies, and already paid for by the taxpayer, can have a “second life” in the private sector.

Meteorological data is the classic example. It is too costly to collect for most of the institutions and enterprises that are in a position to deliver it to interested audiences (such as TV companies). Sharing this information with the private sector allows it to reach more people (through open TV channels, for example), renders a better public service (a hurricane warning), and allows additional value to be created.

The reuse of public sector information not only aims to foster transparency in the public sector – a clearly social and political goal – but also to fully develop the economic potential of public sector information. This potential comes into action, for example, by facilitating the development of new products, services, solutions and jobs in the digital content industry, where content is the major driver of growth. This pragmatic approach is sometimes not well understood or shared by the cultural sector, which takes a more altruistic point of view[1].

The evolution from “reuse of public sector information” to “Open Data”, i.e. automated reuse of public sector information, has been possible thanks to the Semantic Web technologies developed by the W3C (World Wide Web Consortium) since the nineties. Linked Data is a method of publishing structured data so that it can be interlinked and become more useful through semantic queries. The application of standard web technologies such as HTTP, RDF and URIs to the cultural field, or LAM (Libraries, Archives and Museums), is known as LODLAM, Linked Open Data in Libraries, Archives and Museums. Its adoption in projects like Europeana has made it possible not only to link the metadata of different collections but also to let the results be reused immediately by third parties[2].
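To make the Linked Data idea concrete, here is a minimal sketch of the RDF model it rests on: every statement is a subject–predicate–object triple of URIs (or literal values), which is what lets data sets from different institutions interlink. The painting and artist URIs below are hypothetical placeholders (a real project would use a dedicated library such as rdflib and the institution's own identifiers); only the Dublin Core predicate URIs are real vocabulary terms.

```python
# Each statement is a (subject, predicate, object) triple.
# URIs identify things; quoted strings are literal values.
triples = [
    ("http://example.org/painting/42",            # hypothetical subject URI
     "http://purl.org/dc/terms/title",            # Dublin Core "title"
     '"Las Meninas"'),
    ("http://example.org/painting/42",
     "http://purl.org/dc/terms/creator",          # Dublin Core "creator"
     "http://example.org/artist/velazquez"),      # hypothetical artist URI
]

def to_ntriples(triples):
    """Serialize triples in N-Triples syntax: URIs in <...>, literals as-is."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

Because the artist is itself a URI rather than a plain string, a third party can follow that link to more data about Velázquez held elsewhere, which is precisely the interlinking that projects like Europeana exploit.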

We may now come back to the questions arising from the approval of the new Spanish law regarding the reuse of data from LAM.

What is understood by data regarding cultural assets?

Data comprises not only the digital depiction of a work of art or a book, or the information made available through a website or catalogue, but also the metadata. Therefore, cultural institutions need to prepare their digital information not only to provide contents but also to allow third parties to access and reuse their metadata and the raw data derived from the digitisation of texts, images, audio and video.

How to publish open data?

Linked Data representation is the first, technical, step towards open data. A second, legal, step is the adoption of a licence for the reuse of the data. Whether a linked data project is open or not is determined by the nature of the licence under which its data is published.

Following the 5-star deployment scheme for Open Data developed by Tim Berners-Lee,[3] an open licence is the basis for the first level of openness, but automatic reuse, necessary for Open Data, can only be achieved from the third level upwards. This scheme measures how well the data is integrated into the web, but not other aspects, such as ease of reuse, which will be the subject of the Open Data certificates currently being developed.

Besides, the choice of open licences to enable reuse is not a minor issue, as it has a major effect on the quantity and quality of the reuse of the data. In the cultural field there is no consensus on which licence is the most adequate. Creative Commons’ CC0 and Public Domain licences are clearly considered “open” because they waive the exercise of any copyright over the data. Licences that require the user to acknowledge the author, or to use the data only for non-commercial purposes, may raise doubts about the “openness” of the project when applied to data sets. It should be noted that national legislation may limit the licences that can be used by the public sector.



Is Open Data synonymous with data reuse?

Open Data goes beyond the minimum legal requirements for public sector data reuse.

Moreover, Open Data allows the development of reuse strategies in order to:

  1. Reduce the cost of handling reuse requests.
  2. Increase the visibility of the institution.
  3. Improve the quantity and quality of the contents provided on the institution’s website.
  4. Fulfil the institution’s public duty of making its contents available to the public.
  5. Improve the internal management of the institution.
  6. Develop research and knowledge on the works held by the institution.

These issues regarding Open Cultural Data will be discussed at a pre-event to the International Open Data Conference 2016. The pre-event will be held at the Spanish National Library, and it is being organised by the Spanish Ministries of Culture and Industry and SEDIC (professional association of Information and Scientific Documentation).

A full version of this post in Spanish can be found at the SEDIC Blog here.


[1] Henninger, Maureen. The Value and Challenges of Public Sector Information. Cosmopolitan Civil Societies: An Interdisciplinary Journal. 2013, Vol. 5 Issue 3, p75-95. 21p. Available at: https://epress.lib.uts.edu.au/journals/index.php/mcs/article/download/3429/3851 (accessed 1/08/2016).

[2] http://labs.europeana.eu/api/linked-open-data-introduction

[3] http://5stardata.info/


Cover photo by James Kemp


September 14, 2016 Aporta Initiative

Aporta is the national initiative of the Spanish Government for promoting the reuse of Public Sector Information (PSI) and fostering an open data culture among public administrations and society.

Every year, the open data community in Spain has an appointment to discuss and address the challenges of openness and reuse of information in the country: the Encuentro Aporta. This event is designed to be a discussion forum for Spanish-speaking countries to which all agents involved in the open data movement are invited: from infomediaries and reusing entities to experts and public officials.

The Encuentro Aporta is part of the lines of action of the so-called Iniciativa Aporta, a national project that, since 2009, has been working towards awareness and promotion of the reuse of public sector information in Spain. The initiative organises its work programme into seven lines of action, ranging from maintaining the national open data catalogue and offering support and consulting services to public administrations on the reuse of information, to driving international cooperation, advising on the development of the regulatory framework and developing training materials.

This year, with the International Open Data Conference taking place in Madrid, the Encuentro Aporta 2016 is framed within the programme of IODC pre-events. Entitled Global cooperation, local impact, this sixth edition is structured into three work sessions. The first, called Coordination and harmonization as the key to success in publishing, will bring together representatives of public entities showing innovative open data projects in five different areas: tourism, environment, culture, Smart City and university.

Next, there will be a session dedicated to the business sector, analyzing three case studies: Carto, Euroalert and TerceroB, which all demonstrate the potential of open data to develop innovative products and services. Finally, during the third session, local and global projects that promote the following five areas of the open data roadmap will be introduced: charter, standards, skills and training, innovation and measurement.

The Encuentro Aporta 2016 is taking place on Monday 3 October, from 8:30 to 14:15, at the headquarters of the Ministry of Industry, Energy and Tourism, located on Capitan Haya Street, 41, in Madrid.

Due to capacity limitations, attending this event requires filling in, in advance, an online form created specifically for the event. The registration period is already open.

Encuentro Aporta


Cover photo by Felipe Santana


September 1, 2016 Juan Llorens

Juan Llorens is a telecommunications engineer and civil servant currently working at the Secretary of State for Telecommunications and Information Society, Spain

The growth of the Internet and ICTs generates an overwhelming amount of text in electronic format, already beyond human processing limits, so the automatic utilization of these textual resources is becoming an urgent matter.

Language Technologies are a diverse set of technologies that pave the way to a deeper automatic understanding of human language. They comprise Natural Language Processing (NLP) as well as Machine Translation, and they are what make the automatic use of this mass of textual data possible.

Consequently, Language Technologies are giving rise to an emerging, innovative and cross-cutting industrial sector.

Organizations accumulate large amounts of text in electronic format, which could be fuel for the language technology industry.

These texts are valuable in two ways:

  • Their direct value as raw material for producing relevant information by means of Language Technologies.
  • No less relevant, their usefulness for creating and training Language Technologies themselves (a good example is the translation memory of the European Commission’s Directorate-General for Translation, which is the most downloaded dataset on the European Union Open Data Portal).
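As a toy illustration of the first point, producing information from raw text, here is a minimal term-frequency sketch using only the Python standard library. It is a deliberately crude stand-in for real NLP pipelines (the `top_terms` helper and the sample sentence are invented for this example), but it shows the basic move: turning unstructured open text into structured, countable data.

```python
import re
from collections import Counter

def top_terms(text, n=3):
    """Crude term extraction: lowercase, tokenize on letters, count."""
    tokens = re.findall(r"[a-záéíóúñ]+", text.lower())
    return Counter(tokens).most_common(n)

corpus = ("Open data and language technologies turn raw text into "
          "information; open text feeds the technologies themselves.")

print(top_terms(corpus))
```

Real language technologies layer far more on top of this (morphology, parsing, semantics, translation models), but they all start from exactly this kind of large-scale, machine-readable text, which is why open textual datasets matter so much to the sector.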

We could go even further: combining Open Data with Language Technologies may enable a new knowledge revolution, a new global Enlightenment.

But to achieve these potential benefits, specific societal, economic, legal and technical challenges must be faced.

To bring attention to the potential benefits of combining Open Data and Language Technologies, and to address their specific societal, economic, legal and technical challenges, two events will take place in the context of the International Open Data Conference (IODC 2016), to be held in Madrid, Spain, in October 2016.

The first is a Workshop on October 5th (15:30-19:30), where relevant experts on the different sides of this polyhedral issue will share and discuss their different but revealing views and experiences, among themselves and with the audience, in a collective effort to shed new and enriched light on the matter.

You can find more information on this Workshop here: http://opendatacon.org/agenda/pre-events/open-linguistic-data/

The second is an Impact session on October 6th (17:00-17:45), where relevant experts will share well-informed reflections on the challenges and opportunities from different angles, illustrated with use cases in which they have been directly involved.

You can find more information on this Impact session here: http://opendatacon.org/agenda/

Attendance is free.


Cover photo by Fabien Barral


August 18, 2016 Georg Neumann

Open contracting is collaborating, communicating and using technology for innovating development – all of which have been a focus of Georg’s professional career prior to joining the Open Contracting Partnership where he manages communication and advocacy. At the Inter-American Development Bank, Georg has led the Digital Strategy of the Multilateral Investment Fund focussed on supporting micro, small and medium businesses and entrepreneurs in Latin America. He also co-initiated the organisation’s portfolio of projects testing social innovations such as crowdfunding. The nexus to transparency and anticorruption originated at Transparency International where he managed online and internal communications of the global movement and led the discussion of solutions to use technology to fight corruption. He has worked in development projects in Mexico and Morocco and holds a Masters in Strategic Economic and Social Communications from the University of Fine Arts in Berlin.

On August 1, an unlikely story reached a high point. In Ukraine, ProZorro, a volunteer-driven open data project kicked off just two years ago, transformed and revolutionized the country’s public procurement system.

Governments spend vast sums of public money procuring infrastructure, goods and services, from roads and bridges to food and office supplies. But not all governments are transparent about how these deals are done and how those dollars are spent.

The health sector in Ukraine is a particularly notorious example. Money previously wasted on overpriced equipment can now be invested in medicines to save people’s lives.

The size of many contracts makes them extremely vulnerable to corruption. Globally, governments lose billions of dollars each year through inefficiency, mismanagement and graft in public contracts.

Open contracting is an innovative approach that brings together government, civil society and the private sector to tackle this issue. It’s about making data related to all stages of the procurement process readily available for anyone to access electronically, in order to prevent shady deals and save governments money, to empower citizens to monitor how taxpayer dollars are spent, and to give companies the assurance that they have a fair chance to do business with public agencies.

Last year’s International Open Data Conference was a valuable opportunity for us, and other actors excited about the huge potential of open contracting, to learn how well the technical standard at the heart of this approach, the Open Contracting Data Standard (OCDS), is meeting the needs of its users, and what can be improved. Early adopters of open contracting also shared their experience making the case for implementing the OCDS to policy makers as part of larger procurement reforms. Finally, we formed the Joined Up Data Alliance, teaming up with allies working on complementary data sets, to look at how we can strategically connect data on budgets, aid, contracting, corporate registries and spending.

A lot has happened in the world of open contracting since that meeting in Canada. We’ve been thrilled to see the shift to greater transparency in government contracts gather momentum. The data standard, which was once a theoretical exercise, is now producing real, practical data that’s being used to make better public deals in places from Ukraine to Paraguay, and Mexico City to Montreal. In Taiwan and Nigeria, other actors are experimenting with converting contracting data into data standard format for purposes of analysis, training and advocacy. Meanwhile, more than a dozen governments have made new, bold commitments to adopt open contracting. As of the first quarter of 2016, the Open Contracting Partnership’s helpdesk supports over 55 governments and other partners in 22 countries.
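The Open Contracting Data Standard mentioned above publishes each contracting process as structured JSON "releases". The record below is a hedged, heavily simplified sketch (the ocid, buyer and amounts are invented placeholders, and real OCDS releases carry many more fields), but it shows the kind of analysis the standard enables: once deals are published as data, anyone can total up what was awarded.

```python
import json

# A single, simplified OCDS-style release (all values hypothetical).
release_json = """
{
  "ocid": "ocds-xxxx-000-00001",
  "date": "2016-08-01T00:00:00Z",
  "tag": ["award"],
  "buyer": {"name": "Example Ministry of Health"},
  "awards": [
    {"id": "a1", "value": {"amount": 150000, "currency": "UAH"}},
    {"id": "a2", "value": {"amount": 80000, "currency": "UAH"}}
  ]
}
"""

release = json.loads(release_json)

# Sum the awarded amounts across the release.
total = sum(a["value"]["amount"] for a in release.get("awards", []))
print(release["buyer"]["name"], total)
```

Multiply this by every release a government publishes and you get the monitoring use case: journalists and citizens comparing prices across buyers, and companies checking that tenders are genuinely competitive.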

We’re eager to share these exciting new developments with the open data community in Madrid, in a series of sessions, talks and workshops.

At Progress, Challenges, Innovation, a day-long workshop on October 4, we’ll hear from advocates from around the world who’ve implemented open contracting, and discuss the upgrade process of the Open Contracting Data Standard. This will also be a perfect opportunity to work together on the ongoing issues you might be dealing with directly.

On October 5, we’ll be involved in both the Open Data Standards Day and the pre-event on Follow the Money and Open Data.

For the main program from 6-7 October, we’re looking forward to hearing from Canada and Ukraine who’ll share their impact stories on open contracting. Dedicated sessions will focus on open data in public procurement (with Hivos’ open contracting program and one hosted by the Web Foundation), on joined-up data by our friends at Devinit, and look at opening up extractives and land projects with the Natural Resource Governance Institute, the World Resources Institute and Aiddata.

We hope to see you there, as we share our experiences and celebrate our achievements, and look forward to finding new ways to collaborate.

As we are seeing in Ukraine, the story of how open contracting is transforming government procurement and impacting people’s lives is just about to start.


Cover photo by Jen Chillingsworth


Open Knowledge International is a worldwide non-profit network of people passionate about openness, using advocacy, technology and training to unlock information and enable people to work with it to create and share knowledge.

We are proud to announce that registration is open for ‘The Open Exchange for Social Change’. This event has been referred to in the past as ‘The Unconference’ during the flurry of pre-event IODC Madrid announcements made over the past few weeks. The response on social media has already been amazing, and we have received many inquiries about the event itself.

We decided to name it The Open Exchange for Social Change to be open and transparent about our goals for the event. The name ‘The Open Exchange’ reflects both the guiding principles and the activities that will take place during the event. We aim to create a space where participants will be able to exchange knowledge and understanding and build solidarity that will lead to better outcomes for IODC and beyond. It is an open space where you will be able to propose the discussion topics that are most relevant and urgent for your work.

The theme that we chose for this year is ‘Social Change’. We want to address more than just “open data” and reflect on the change we are all passionately involved in. By ‘social’ we mean different and diverse aspects of our lives: from politics and the communities we interact with, to our relationships with the environment, the financial world and even space. We want to explore how open data creates, contributes to and supports social change. How do we create positive or negative change, and what should we do next? Your ideas and actions will serve as content for the main conference, and will help us to set the goals for the IODC roadmap.

To have a successful Open Exchange, we created the ‘Buddy system’. Buddies are participants who take an active role in contributing to and shaping the event to assure its smooth running and success. Buddies can take on the following roles:

– Leading/Facilitating a session

– Supporting new facilitators to run sessions

– Helping to document the event and session outcomes

– Making sure that the Unconference feeds the main conference by using your social media skills

To support your contribution as a Buddy, we will be providing an online session where we will give an overview of The Open Exchange, and tips on Facilitation.

The lead facilitator of the Unconference is Dirk Slater from Fabriders. Dirk has years of experience in facilitating participatory events, and leading successful workshops that focus on civil society and technology, and we are excited to have him on board and share with us the secrets to successful events focused on collaboration.

Anyone who comes to The Open Exchange will be able to facilitate a session. We invite you to register as a buddy ahead of time so that we can prepare and support you as best as we can.

If you are interested in joining The Open Exchange for Social Change, please register for the event in advance at http://open-exchange.net/. You can also find all the information you need about the event on the site. Didn’t find what you were looking for? Write to us on the forum: https://discuss.okfn.org/c/iodc-unconference.

Looking forward to seeing you on the 4th of October at 9:30 at Centro de convenciones IFEMA NORTE, Madrid!


Cover photo by Thomas Litangen
