Tag Archives: open access

Access to Victorian fire data

Yesterday afternoon an interesting story appeared on ZDNet Australia: “Vic Govt limited Google’s bushfire map”. I encourage you to read the full post on ZDNet Australia, but in summary, the post documents Google’s trouble in gaining access to Victorian Government data about the movement of bushfires in Victoria.

According to the post, Google has been working with the Country Fire Authority, which manages fires on private land, to overlay the Authority’s data onto Google Maps and produce a real-time map of the locations of the fires. The map also uses a colour scheme to convey the seriousness of each fire: green (safe), yellow (controlled), orange (contained) and red (ongoing).
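An overlay of this kind is simple to sketch. The snippet below is a purely hypothetical illustration – the field names, sample coordinates and hex colours are my own inventions, not Google’s or the CFA’s – of how incident records carrying the four statuses could be converted into a GeoJSON layer that a mapping client can render:

```python
# Hypothetical sketch: fire-incident records -> GeoJSON overlay using the
# four-colour status scheme described above. All field names and sample
# data are invented for illustration.
import json

STATUS_COLOURS = {
    "safe": "#00a651",        # green
    "controlled": "#fff200",  # yellow
    "contained": "#f7941d",   # orange
    "ongoing": "#ed1c24",     # red
}

def incidents_to_geojson(incidents):
    """Build a GeoJSON FeatureCollection, one point feature per incident."""
    features = []
    for inc in incidents:
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [inc["lon"], inc["lat"]]},
            "properties": {"name": inc["name"],
                           "status": inc["status"],
                           "marker-color": STATUS_COLOURS[inc["status"]]},
        })
    return {"type": "FeatureCollection", "features": features}

sample = [{"name": "Example fire", "lat": -37.30, "lon": 145.06,
           "status": "ongoing"}]
print(json.dumps(incidents_to_geojson(sample), indent=2))
```

The point of the sketch is how little is involved technically: once the government grants permission, plotting the public-lands data is the same few lines again with a different feed.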

Naturally, this map is immensely beneficial to those in Victoria and elsewhere who are attempting to track the bushfires.

However, Google has run into some problems gaining access to data to plot fires on public lands. This data is owned and controlled by the Victorian Department of Sustainability and Environment, and is covered by Crown copyright. As such, permission is required from the government before the data can be used, and for Google this permission has not been forthcoming. The result is that Google has been unable to plot this data onto their map.

As noted in the ZDNet Australia post, this is not the first time Google has had trouble accessing and using Australian government data. Google was expressly denied permission by the Commonwealth Department of Health and Ageing to overlay data from the National Public Toilet Map onto a Google Map.

Why is the government so unwilling to share its data? My guess is that there are two reasons. The first is that in some cases the government holds the misguided idea that its data can be used to build online systems or services (usually geospatial ones) that generate revenue by charging for access. The second is that the government is naturally risk-averse and prefers to control its data as tightly as possible.

What the government is forgetting is that it is a representative of the people, and that government-owned data has been collected using public funds. We, the Australian public, have paid for that data through our taxes, and we should have the benefit of it. Surely the public is best served by ready access to that data in the most efficient and convenient way possible; if that is through a Google Map, then the government should enable it. In the face of a tragedy such as the Victorian bushfires, the government should not hinder our ability to access as much information as possible about that tragedy, including the ability to easily track the bushfires via a Google Map.

Arguments have been made that, since the access and use issue can be traced back to Crown copyright, Crown copyright should be removed, as in the United States, where government data and publications are held to be in the public domain. I do not believe that this is the answer. Rather than removing Crown copyright completely, the government should be encouraged to release its material, where possible, under open licences such as the Creative Commons Attribution licence. This should be the default position, unless access to the material must be restricted for privacy or national security reasons. The government must adopt a “push” model – systematically pushing its material out to the community – rather than a “pull” model, where members of the public must seek permission or lodge a Freedom of Information request to access that material.

Crown copyright can serve an important purpose, if only through the requirement of attribution (imposed by the Creative Commons licence, and similar to moral rights), which requires that the author of the material – in this case, the government – be attributed wherever the material is reproduced. Attribution of government copyright material serves a two-fold purpose: (1) it allows the government to retain some control over the material it produces; and (2) it assures the public that the material has come from a reliable source.

Our research group at QUT has done some work on this area. See the auPSI website for more information.

APSR Open Access Publishing: A PKP User Group Workshop

On Thursday 4 December 2008, I attended the Australian Partnership for Sustainable Repositories (APSR) Workshop entitled, Open Access Publishing: A PKP User Group Workshop.

PKP is the acronym used for the Public Knowledge Project, a research and development initiative directed toward improving the scholarly and public quality of academic research through the development of innovative online publishing and knowledge-sharing environments (see “About the Public Knowledge Project”). PKP was founded in 1998 and is located at the University of British Columbia and Simon Fraser University in Canada and Stanford University in California. PKP has developed Open Journal Systems (OJS) and Open Conference Systems (OCS), open source software for the management, publishing and indexing of journals and conferences.

Professor John Willinsky, Director of PKP, Professor of Education at Stanford University School of Education and author of “The Access Principle: The Case for Open Access to Research and Scholarship”, came out to Australia for the workshop, as did PKP developer MJ Suhonos. My notes from Professor Willinsky’s plenary address appear in this post.

The workshop was held at the University of Sydney and continued on Friday 5 December. I was unable to attend on Friday, but my colleague, Professor Anne Fitzgerald of QUT Law School, gave a presentation entitled, “Constructing open access by effective copyright management” and QUT’s DVC, Professor Tom Cochrane, spoke on “The Institutional Perspective on Open Access – dos and don’ts”. The full program can be viewed on APSR’s website.

My notes from Thursday follow.

The workshop was primarily focused on users’ experiences with PKP software. So we heard from Eve Young, Helen Morgan and James Williams from the University of Melbourne, Bobby Graham from the National Library of Australia and Susan Lever, Editor of the Journal of the Association for the Study of Australian Literature about their experiences with using OJS and from Peter Jeffery of the Australian Association for Research in Education (AARE) about using OCS. Generally the feedback was very positive (especially for OJS) but some suggestions for improved usability (particularly for non-tech savvy academics) were also made. Susan Lever spoke about the exciting opportunity that online publishing offers where articles can contain in-text live links to other sites offering additional information, images and videos, which greatly enrich the experience of the reader.

The university ePress was also a topic of the day. Lorena Kanellopoulos informed us about the management and operation of the Australian National University (ANU) ePress and Dr Alex Byrne spoke about the University of Technology Sydney (UTS) ePress. UTS ePress publishes the journal Portal, which I believe was the first journal to be published in Australia using PKP software. The main point to come out of Lorena and Alex’s presentations, for me, was that university ePress costs are not high, and that universities can publish their own journals successfully and cost-effectively using open source software and a “publish online with a print-on-demand option” approach. Dr Geoffrey Borny, Visiting Fellow in the School of Humanities, College of the Arts and Social Sciences and Member of the Emeritus Faculty at the Australian National University, gave a personal account of what it was like to publish a book with ANU ePress. He was a very happy customer, saying that ANU ePress was efficient and professional, and that publishing online had given him much wider exposure than he expected.

For me, however, the most interesting presentation of the day (aside from Professor John Willinsky’s plenary address, which is covered in a separate post) was from Andrew Stammer, Journals Publishing Director at CSIRO Publishing. As Andrew pointed out, the CSIRO Publishing Charter creates an interesting creative tension between CSIRO Publishing’s commercial role and its public interest role by stating that CSIRO Publishing is to:

  1. Operate within CSIRO on a commercial basis with its viability entirely dependent on the capacity to generate revenue and sufficient return on investment (i.e. CSIRO Publishing must fund itself – it apparently receives no funding from CSIRO or the Australian Government); and
  2. Carry a national interest publishing obligation on behalf of CSIRO within this commercial role.

Despite not agreeing with everything that Andrew had to say (I was highly amused to see that he included “lobbying” amongst the publishers’ roles, right up there with “striving for quality in content” and “nurturing relationships”), I thought that his presentation was remarkably well balanced. He spoke about the OA initiatives of CSIRO Publishing, including the publishing of an OA journal – The South Pacific Journal of Natural Science. He explained the publishing process, being that publishers:

  • Acquire content;
  • Review and develop content (facilitate peer review);
  • Prepare content for dissemination;
  • Disseminate content; and
  • Promote content and authors.

Andrew also spoke at length about the costs associated with publishing, and these costs seemed quite incredible to me. For journal publishing of 1162 pages, across 108 articles in 12 issues, printing alone cost $43,166. This figure is quite distinct from the costs of layout, peer review, promotion or even postage (postage cost thousands of dollars more). Many of these costs, I think, could be avoided or massively reduced by online dissemination and print-on-demand services.
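Taking Andrew’s printing figure at face value, the per-unit arithmetic is easy to check (a quick sketch whose only inputs are the numbers quoted above):

```python
# Per-unit breakdown of the quoted printing bill of $43,166
# for 1162 pages across 108 articles in 12 issues.
pages, articles, issues = 1162, 108, 12
printing_cost = 43166

print(f"per page:    ${printing_cost / pages:,.2f}")     # roughly $37 a page
print(f"per article: ${printing_cost / articles:,.2f}")  # roughly $400 an article
print(f"per issue:   ${printing_cost / issues:,.2f}")    # roughly $3,600 an issue
```

On these numbers, an online-first, print-on-demand model only has to beat about $37 a printed page to come out ahead – before postage is even counted.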

Yet what really jumped out at me was a graph that Andrew displayed, which he had acquired from the journal article: Rowlands I and Nicholas D (2006) The changing scholarly landscape, Learned Publishing, 19, 31-55. He showed this under the heading, “What do authors want?” and I was only able to quickly scribble down the order in which the items appeared:

  1. Reputation of journal
  2. Readership
  3. Impact factor
  4. Speed of publication
  5. Reputation of editorial board
  6. Online ms submission
  7. Print & electronic versions
  8. Permission to post post-print
  9. Permission to post preprints
  10. Retention of copyright.

Being a lawyer and an advocate for authors retaining copyright in their works and granting their publisher only a Licence to Publish, I was rather concerned to see “retention of copyright” last on a list of “what authors want”.

On Friday morning, I looked up the journal article online; its full citation is: Rowlands, I. and Nicholas, D. (2006). The changing scholarly communication landscape: an international survey of senior researchers. Learned Publishing 19(1), 31-55. ISSN: 0953-1513.

The article presents the results of a survey “on the behaviour, attitudes, and perceptions of 5,513 senior journal authors on a range of issues relating to a scholarly communication system that is in the painful early stages of a digital revolution” (p31). The survey was conducted by CIBER, “an independent publishing think-tank based at University College London” (p31), in early 2005 and was commissioned by the Publishers Association (PA) and the International Association of Scientific, Technical and Medical Publishers (STM) with additional support from CIBER associates. I was somewhat skeptical about the survey being commissioned by two publishing bodies, but the article’s authors assure readers that:

The views expressed in the Report and in this article are those of the authors alone, based on the data. They do not represent a corporate position, either of the PA or STM. The survey was conducted in a totally unbiased fashion; the research team (CIBER) has no allegiances other than to the data (p33).

The graph in the article is labelled “Figure 7 Reasons for choosing last journal: averages, where 5 = very important, 1 = not at all important (n = 5,513)”, not “what authors want”. The actual figures in the graph were –

  • Reputation of the journal – 4.50
  • Readership – 4.21
  • Impact factor – 4.04
  • Speed of publication – 3.89
  • Reputation of editorial board – 3.55
  • Online manuscript submission – 3.43
  • Print and electronic versions – 3.21
  • Permission to post post-print – 2.58
  • Permission to post pre-print – 2.34
  • Permission to retain copyright – 2.31

In my opinion, the reasons why an author chose to publish in a particular journal in the past are not necessarily indicative of what will influence where they publish in future, especially in the very changeable environment of academic publishing. Yet it is still somewhat concerning to see permission to post pre-print and post-print versions of an article, and to retain copyright, rated so low.

It is worth asking why the survey results show these preferences. It is important to point out that the survey was undertaken in 2005, so it does not reflect the most current state of affairs. Additionally, the authors identify the age of the survey respondents as a potential influencing factor:

More than a third (35.9%) of the respondents are baby boomers, aged 45 or older, and many of their attitudes will have been formed during a long period of relative stability for the academic sector, at a time when the current difficulties facing institutional library budgets and the scholarly communication market were not so evident (p37).

The authors also write:

Many spoke of the influence of external measures, like impact factors, in determining where they feel they have to publish, sometimes to the detriment of their readers (p41).

However, “readership” and “speed of publication” rated almost as highly as “reputation of journal” and “impact factor” – features which I would argue could be delivered quite effectively by OA journals, even relatively new ones.

My final point in relation to this article is that I perceived an implicit bias against OA publishing, despite the authors’ claims to the contrary. This I perceived from the phrasing of questions with a negative slant (for example, “How disruptive is open access?”) and from comments such as this:

There is a significant relationship between previous experience of publishing in an open access environment and researcher’s attitudes to the value they attach to peer review. Authors who have published in an open access journal are more likely to attach lower value to the importance of peer review (p44).

To me, this statement implies that OA journals do not necessarily use peer review or value peer review, which is simply not true.

Notwithstanding my opinions about how the results of the survey are presented, the article is an interesting read. The OAK Law Project has also conducted its own survey, in 2007, on the attitudes and practices of Australian academic authors in relation to the publication and dissemination of their research. The survey report can be accessed here (or by direct link to PDF).

Seminar: Towards a National Information Strategy

“Australia is behind many other advanced countries in establishing institutional frameworks to maximise the flow of government generated information and content” – Venturous Australia: Building Strength in Innovation.

On 19 November 2008, I participated in a free public seminar about the Review of the National Innovation System: Towards a National Information Strategy. The half-day seminar was held in the Hyatt Hotel in Canberra and was hosted by the Department of Innovation, Industry, Science and Research and the QUT Law School.

The speakers at the seminar included Professor Brian Fitzgerald and Professor Anne Fitzgerald, both IP professors in the QUT Law School, and Dr Nicholas Gruen of Lateral Economics. You can view the seminar agenda and speaker bios here.

Professor Brian Fitzgerald spoke about innovation as a force that results from the exchange of ideas, and said that collaboration is a key methodology for innovation. He referred to statements made earlier this month by Finance Minister Lindsay Tanner, who said, “The rise of internet-enabled peer production as a social force necessitates a rethink about how policy and politics is done in Australia” (reported in the IT section of The Australian). Professor Fitzgerald spoke about how we need to move from a “gated” model of information distribution and knowledge creation to an access-based model. He said, “By sharing IP we can harness a powerful new force – mass collaboration”. He also noted Barack Obama’s technology policy, which promotes openness of the internet and openness in government and research.

Dr Nicholas Gruen gave a compelling talk, very similar to his talk at the CRC-SI Conference this year (see my earlier post). I like the way he defined innovation as “fragility in the face of serial veto” or “fragility amongst robust hazards”. He also gave his own interpretation of the current financial crisis – “The world has created the perfect storm designed to show us the importance of managing information.”

One of Dr Gruen’s many examples of how small amounts of data or information could vastly improve the lives of Australian citizens was what he called the “windows on workplaces” scheme. The idea is this: increasingly, it is becoming important to Australians to have a work/life balance. Many workplaces claim to offer a work/life balance, but in reality many do not, and currently there is no way for people to find out the true state of affairs until they actually start working for the company in question – and usually end up working long hours and missing social and family engagements. Wouldn’t it be easy, Dr Gruen asks, to ask people a few simple questions – this could be done when the ABS is collecting census data – about whether or not their workplace actually delivers on its work/life balance promises? Workplaces could then be ranked according to what they actually provide, not just what they claim to provide, creating proper accountability and incentives for workplaces to deliver on their promises. The scheme is simple and cheap, but if successful it could have an enormous impact on the lives of working Australians.

Professor Anne Fitzgerald spoke about policy developments in Australia and around the world on access to and reuse of government data and information. These policy developments are charted in a literature review that Professor Anne Fitzgerald is currently undertaking, entitled, Policies and Principles on Access To and Reuse of Public Sector Information: a review of the literature in Australia and selected jurisdictions. (See my earlier post on this).

I gave a brief overview of the research we have conducted in this area in the QUT Law Faculty. I also spoke about Professor Anne Fitzgerald’s literature review and our new website about access to and use of public sector information (see my earlier post). My PowerPoint presentation can be accessed here.

Overall, it was a very successful and informative seminar.

It was also great to hold the seminar in Canberra. Not only did it enable us to engage with many federal politicians, but we also had the afternoon to look around this lovely city. I visited the National Gallery of Australia, the High Court of Australia and old Parliament House, and had a grand old time before my flight back to Brisbane.

New: literature review and website on access to public sector information

Professor Anne Fitzgerald of the QUT Law Faculty is currently undertaking the massive task of reviewing the literature around policies and principles on access to and reuse of public sector information in Australia and worldwide.

The literature review is divided into chapters according to jurisdiction. This is an ongoing project and Professor Fitzgerald will be releasing the literature review in installments as each chapter is completed.

She has just released Chapter 1: Australia and Chapter 2: New Zealand. Currently, these chapters appear together in PDF form, but I believe they will appear separately later. The literature review so far is extremely comprehensive – chapters 1 and 2 alone comprise 268 pages!

Forthcoming are the remaining chapters – Chapter 3: International; Chapter 4: Europe, UK and Ireland; Chapter 5: United States and Canada; and Chapter 6: Asia.

Currently, the literature review is available in the QUT ePrints Repository (here), but it will soon appear on the new website: http://www.aupsi.org.

http://www.aupsi.org is the website of a new research group with which I am involved – Access to and Use of Public Sector Information (auPSI). auPSI’s mission is to provide a comprehensive web portal that:

  • promotes debate and discussion about the re-use of PSI in Australia and more broadly throughout the world;
  • focuses on developing and implementing an open content licensing model to promote access to and re-use of government information;
  • develops information policy products about delivering access to and encouraging the re-use of PSI;
  • keeps users informed about international developments in this area; and
  • assists governments and policy makers on the development of appropriate policy about the creation, collection, development and dissemination of public sector information.

This mission is built on achieving three objectives:

  1. greater efficiency in the reuse of PSI throughout the world;
  2. better quality outcomes; and
  3. greater impact of publicly funded knowledge within our society.

The literature review will be released in full on this website, as will a forthcoming article by Neale Hooper, Timothy Beale, Professor Anne Fitzgerald and Professor Brian Fitzgerald entitled, “The use of Creative Commons licensing to enable open access to public sector information and publicly funded research results – an overview of recent Australian developments”. Keep your eyes peeled.

More on the Brisbane Declaration

This is what Professor Arthur Sale of the University of Tasmania, one of the chief architects of the Brisbane Declaration, has written about it:

…May I tease out a few strands of the Brisbane Declaration for readers of the list, as a person who was at the OAR Conference in Brisbane.

1. The Declaration was adopted on the voices at the Conference, revised in line with comments, and then participants were asked to put their names to it post-conference. It represents an overwhelming consensus of the active members of the repository community in Australia.

2. The Conference wanted a succinct statement that could be used to explain to senior university administrators, ministers, and the public as to what Australia should do about making its research accessible. It is not a policy, as it does not mention any of the exceptions and legalisms that are inevitably needed in a formal policy.

3. The Conference wanted to support the two Australian Ministers with responsibility for Innovation, Science and Health in their moves to make open access mandatory for all Australian-funded research.

4. Note in passing that the Declaration is not restricted to peer-reviewed articles, but looks forward to sharing of research data and knowledge (in the humanities and arts).

5. At the same time, it was widely recognized that publishers’ pdfs (“Versions of Record”) were not the preferred version of an article to hold in a repository, primarily because a pdf is a print-based concept which loses a lot of convenience and information for harvesting, but also in recognition of the formatting work of journal editors (which should never change the essence of an article). The Declaration explicitly makes it clear that it is the final draft (“Accepted Manuscript”) which is preferred. The “Version of Record” remains the citable object.

6. The Declaration also endorses author self-archiving of the final draft at the time of acceptance, implying the ID/OA policy (Immediate Deposit, OA when possible).

While the Brisbane Declaration is aimed squarely at Australian research, I believe that it offers a model for other countries. It does not talk in pieties, but in terms of action. It is capable of implementation in one year throughout Australia. Point 1 is written so as to include citizens from anywhere in the world, in the hope of reciprocity. The only important thing missing is a timescale, and that’s because we believe Australia stands at a cusp.

What are the chances of a matching declaration in other countries?

Arthur Sale
University of Tasmania

This is what Peter Suber had to say on his blog:

This is not the first call for OA to publicly-funded research. But I particularly like the way it links that call to (1) OA repositories at universities, (2) national research monitoring programs, like the HERDC, and (3) the value of early deposits. Kudos to all involved.

Just announced: Brisbane Declaration [on open access in Australia]

Following the conference on Open Access and Research held in September in Australia, and hosted by Queensland University of Technology, the following statement was developed and has the endorsement of over sixty participants.

Brisbane Declaration

Preamble
The participants recognise Open Access as a strategic enabling activity, on which research and inquiry will rely at international, national, university, group and individual levels.

Strategies
Therefore the participants resolve the following as a summary of the basic strategies that Australia must adopt:

  1. Every citizen should have free open access to publicly funded research, data and knowledge.
  2. Every Australian university should have access to a digital repository to store its research outputs for this purpose.
  3. As a minimum, this repository should contain all materials reported in the Higher Education Research Data Collection (HERDC).
  4. The deposit of materials should take place as soon as possible, and in the case of published research articles should be of the author’s final draft at the time of acceptance so as to maximize open access to the material.

Brisbane, September, 2008

OAR conference notes – Andrew Treloar

Dr Andrew Treloar – ANDS Establishment Project

Blue print for ANDS = Towards the Australian Data Commons (TADC) – developed during 2007 by ANDS Technical Working Group

TADC: Why data? Why now? – increasing data-intensive research; almost all data is now born digital; “Consequently, increasing effort and therefore funding will necessarily be diverted to data and data management over time”

TADC: Role of data federations – with more data online, more can be done; increasing focus on cross-disciplinary science

Changing Data, Changing Research – e.g. Hubble data has to be released 6 months after creation

ANDS Goal = to deliver greater access, easier and more effective data use and reuse

ANDS Implementation assumptions:

  • ANDS doesn’t have enough money to fund storage, and so is predicated on institutionally supported solutions
  • Not all data shared by ANDS will be open
  • ANDS aims to leverage existing activity, and coordinate/fund new activity
  • ANDS will only start to build the Australian Data Commons
  • ANDS governance and management arrangements are sized for the current funding

Realising the goal – need to:

  • Seed the commons by connecting existing stores
  • Increase (human) capability across the sector in data management and integration

ANDS structure = four programs:

  1. Developing Frameworks (Monash) – about policies, national understandings of data management, and research intensive organisations = assisting OA by encouraging moves in favour of discipline-acceptable default data sharing practices
  2. Providing Utilities (ANU) – Services Roadmap, national discovery service, collection registry, persistent identifier minting and management = assisting OA by improving discoverability particularly across disciplines (ISO2146)
  3. Seeding the Commons (Monash) – recruit data into the research data commons = assisting OA by increasing the amount of content available, much of it (hopefully) OA
  4. Building Capabilities (ANU) – improving human capability for research data management and research access to data – esp. early career researchers teaching them good data management practices from the beginning = assisting OA by advocating to researchers for changed practices

OAR conference notes – Maarten Wilbers

Session Six: A Legal Framework Supporting Open Access

Maarten Wilbers – Deputy Legal Counsel, CERN

Large Hadron Collider (LHC) – switched on 10 September

SCOAP = Sponsoring Consortium for Open Access Publishing in particle physics

Fundamental research mandate in particle physics – in a good place to move to full OA publishing of their scientific data and publications – this might be the “tipping point” for scientists in other disciplines

CERN was founded in the early 1950s – OA in high energy physics was “in the cards” from the beginning…because OA is so logical

If you walk around CERN you can see the enormous tools constructed from public funds to help scientists gain greater understanding of small particles – the case for OA can almost be made without a word being spoken

OA in publishing is the future

CERN’s 1954 Convention laid the foundation for a culture of openness in the dissemination of the organisation’s scientific work: CERN must perform fundamental research for non-military purposes and make the results of its work generally available

This requirement of openness has helped shape a string of milestones:

  • Scientific collaboration across national (and political) boundaries;
  • Preprint culture and peer review;
  • World Wide Web;
  • Computing Grid and Open Source software;
  • And most recently: promotion of OA publishing.

The legal frameworks governing these activities are supportive rather than restrictive in nature and adapted to collaboration involving multiple participants. Legal issues mostly concern copyright and are generally uncontroversial.

OA is a logical application of the web.

SCOAP aims to convert high quality particle physics journals to OA

Scientific experiments at CERN reflect CERN’s requirement of openness

Collaboration usually laid down in MOU – IPR vested in creating party, wide licensing between all parties involved

Publication of CERN’s work: particle physics pioneered the pre-print culture in the 1950s, scientific manuscripts circulated between scientists for peer review before publication

The main milestone was the creation of the World Wide Web at CERN by Tim Berners-Lee

1992 – CERN released the WWW software into the public domain – “CERN relinquishes all intellectual property rights to this code, both source and binary form and permission is granted for anyone to use, duplicate, modify and redistribute it”

Why OA (from CERN’s perspective)?

  • High quality journals, offering peer-review, are the [High Energy Physics] HEP’s community’s “interface with officialdom”;
  • Depending on definition of HEP, between 5000 and 7000 HEP articles published each year, 80% in 6 leading journals by 4 publishers
  • Subscription prices make the current model unsustainable. Change is required
  • HEP is a global undertaking and OA solutions should reflect this.

CERN’s potential solutions for OA publishing:

  • Articles free to be read for all
  • Tender process will result in price of article; linked to quality
  • ….

Legal issues – keep things as simple as possible!

A strong example of OA publishing – the design of the LHC was published in an OA journal (Journal of Instrumentation..?) just recently

OAR conference notes – Richard Jefferson

Richard Jefferson – Opening the innovation ecology

  • Public good is not an abstract

Yochai Benkler Stack: Physical-Code-Content-Knowledge

We should ask the question: if we are successful in that everything is made OA – what then? We must make sure that the knowledge we generate will enable people to act on this knowledge and use it for benefit

The post-Yochai Benkler Stack = Physical-Code-Content-Knowledge; Capability to Act

We now have a system that is so opaque, and has an intrinsic “impermissibility” embedded in it, that it is not useful and the capability to act on it is restrained

CAMBIA – focused on innovation system reform

BiOS Initiative – launched early 2005 with an article in Nature; biological open source (Biological Innovation for Open Society);

Patent system – actually a system based on open disclosure
This is not about rhetoric – it is about the practical goal of efficiency

OS – open source; open science; open society (need inclusiveness)

Used the example of “golden rice” – once the “poster child” of biological engineering. Rice was developed for third-world areas where vitamin A deficiency in food was causing children to go blind, but the result used so many patented products and processes that eventually golden rice was not able to go ahead

Patent Lens – develop harmonized structure and infrastructure for searching patents; embedded metadata about patents; web 2.0 quality decision support about patents;

Efficiency = minimise tainting of a product from incorporating other people’s IP (usually unknowingly) and maximise capacity for adoption – can try to do this by improving people’s knowledge about what IP is incorporated and enhancing decision-makers’ ability to make good decisions for the public good

Persistent, pervasive, jurisdiction agnostic activity = platform for community collaboration and transparency

Proper parsing, visualization and decision-making

Initiative for Open Innovation – increasing the equity, efficiency and effectiveness of science-enabled innovation for public good

Defining open innovation:
Open = transparent
Open = inclusive

Web-based tools for scientists, funding agencies, the public sector and innovation enterprises to mine the patent world

Build Patent Lens into Nature and PLoS Biology – to show, where readers are reading an article about a particular invention, whether the author has filed a patent on it

OAR conference notes – John Wilbanks

John Wilbanks (of Science Commons) – The Future of Knowledge

Knowledge is a set of building blocks – its value is limited until you start to put it together with other ideas and knowledge

Ideas and knowledge want to be connected

2 futures – we get to choose which we build – (1) one in which only the people who have money have access to knowledge; (2) one in which there is an open network

(1) Knowledge brings revolutions

The past of knowledge = “human-scale knowledge” – the scholarly canon (journals) – knowledge was human-organised and human-structured
How did this knowledge bring a revolution?

Moving to a world where knowledge acquisition is faster, smaller, cheaper and more robotic. Moving from a world where humans generate the scale of knowledge to a world where machines generate the scale

We have an implicit network already there for knowledge, but because we are generating it so quickly and on such a large scale, we are coming up against barriers we never encountered before – legal (copyright, DRM), technical (we still use paper-based formats online that cannot be searched by machines, i.e. PDF), business (publishers make money from closed access and we don’t yet know how they can make money or build business models around open access) and social (scientists still get rewarded for being closed)

Over-atomised knowledge – smaller and smaller questions – the primary output is a paper – John argues that papers are not the primary vehicles for knowledge in a digital world

Incremental advances via technology – no one takes big risks to achieve great advances anymore because you don’t get rewarded for taking these risks; in fact you come up against huge legal barriers that prevent you from using other research to take these risks

(2) We need to make systemic changes that connect knowledge

e.g. “the commons” – which has a number of different meanings: (1) land we hold in common, e.g. a public footpath; the right to do research – rights of way across private property; (2) no copyright – things we all own

We are coming from a world where it was hard to be a creator and disseminate your work. We are not in that world anymore. There is now a disconnect between the copyright laws that Disney wants and the copyright laws that we as individual creators want. This is where the commons can make a systemic change.

Systemic change in the way we think about how we share knowledge – not just paper-based formats in digital form, which force us to use technologies that are immediately outdated. What kinds of technology can we use instead? A network of devices (layers: physical; code; content – there have been many developments of openness in these layers, but we have also seen an imposition of control in them (copyright)). Do we need new layers – knowledge layers, graph layers, etc.? Info atomisation kind of forces our hand to do this. Knowledge access needs to support the questions being asked (e.g. when you type a query into Google, it tells you to read thousands of papers – this is not the ideal answer)

Copyright is incompatible with ideas connecting to each other.

(3) The disruptive force of connected knowledge

“guild” culture (as in historical sense of guilds, where the crown put limits on people not in the guild from weaving etc)

The way we do science actively discriminates against crowds and the wisdom of crowds

Knowledge can be democratized: programming; creativity; buying and selling
It is easy, cheap and free

There are no office superstores for science; there are no internet marketplaces for science…but they are coming

Destroying a guild culture of knowledge…what will come after it?

Creating a network culture for knowledge

• Are we going to “watch” the knowledge like TV, or do something with it? – in the future of knowledge, we should do stuff with our knowledge rather than just consume it

Commentators: Dr Terry Cutler and Prof Mary O’Kane

Dr Cutler –

Proud of the focus in the Innovation Review on open access; however, first an apology and explanation – there is a difference between the web version and the print version – both were supposed to be released under CC but were not (a copyright assertion for Dr Cutler appeared instead) – attempts are now being made to have this rectified for the web version.

Key assertions from the report = about investment in people; global integration; flows of information and the freedoms to innovate

The 2% challenge for Australia – at best, we have a 2% share of global knowledge generation, and we don’t pay enough attention to the other 98% and how we access it – as a country we will always have an interest in an open network because we derive the most benefit from it

Flows of information = communications. Communications theory and legal principles around communications were always based on connectivity. Open access is really just an extension of these principles.

Challenge – who really “owns” this problem of driving solutions (particularly at a government level)? – we need the government to address accessibility issues and articulate a national innovation policy – someone needs to take responsibility for this at the centre of government

Too much emphasis on “protectable” knowledge and not enough on the informal networks and social networks that underpin the generation of an innovative community – we need to open up access to that tacit knowledge and put social networks back into science and technology

Professor Mary O’Kane –

(1) is the future that John is talking about possible? How do we get to participatory science?

Can Australia lead this move into a participatory culture? We need to change the incentives for scientists. We need to change the social culture and drivers generally. So what are the drivers? Usually the intrinsic values are strongest (i.e. solving problems) not money. So how can we celebrate these intrinsic values? Across the university sector we need to reward people for open publishing.

(2) Issues that arise if you start to get the participatory culture going?

A problem that arises when you use networks that have been built automatically is that it is very hard to “probe the node” and know what is in the network. But does the human need to know, or can we leave this to the machine? Do we need to know the knowledge? And at what level?

Questions/comments

[John: we need to lower the cost of failure to increase the rate of innovation (i.e. in the context of start-ups)]

(1) Richard Jefferson: the power of the guild is building value, trust and quality control and we shouldn’t erode that

John (response): we don’t need to get rid of the guild completely, but we need to build another layer where we can build on the knowledge of everyone – we can still have trademarks etc. to control quality

Mary (response): I’ve always wondered why we don’t use the internet more for structured, controlled discussion about things – there is no reason why we couldn’t, and that would also help control quality by generating discussion

(2) Roger Clarke – the “tacit knowledge problem” seems to assume that the way the human mind works can be reduced to a computer-based system – that the mind has a generic model we can all grasp and we just haven’t transferred it over to the computer yet. But everyone thinks differently.

John (response): I don’t think we can actually encode how the mind works, but we need to make information available. That is the importance of openness – you need to be able to read, criticize and comment on what I put up, and that is how we see the reflection of the many different minds at work. Getting it into the computer means we can start accessing that information and competing on it using our brains rather than competing on our access to computers.