New & Noteworthy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 28 October 2013


Citation

(2013), "New & Noteworthy", Library Hi Tech News, Vol. 30 No. 9. https://doi.org/10.1108/LHTN-10-2013-0058

Publisher: Emerald Group Publishing Limited


New & Noteworthy

Article Type: New & Noteworthy. From: Library Hi Tech News, Volume 30, Issue 9

Identifying a Country’s National Presence in the Published Record: New Report

Written by OCLC Research Scientist Brian Lavoie, Not Scotch, but Rum: The Scope and Diffusion of the Scottish Presence in the Published Record uses WorldCat bibliographic and holdings data to identify, characterize, and track the diffusion of the Scottish presence in the published record. The report describes a repurposable methodology, based largely on automated processing, for identifying a country’s national presence: materials published in the country, published by the country’s nationals, or published about the country. Scotland is employed as a case study to illustrate the methodology’s application: the salient features of the nearly two million distinct publications in the Scottish presence are discussed, along with the diffusion of these materials around the world.
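To make the report’s three-facet definition concrete, here is a minimal illustrative sketch in Python of how such a classification might work. The record fields and string matching below are simplifications invented for this example; the report’s actual methodology operates on WorldCat’s MARC bibliographic and holdings data with far more nuance.

```python
# Illustrative sketch only: classify simplified bibliographic records into
# the three facets of a national presence (published in, by, or about a
# country). Field names are invented stand-ins, not OCLC's actual schema.

from dataclasses import dataclass, field

@dataclass
class BibRecord:
    title: str
    place_of_publication: str = ""       # roughly MARC 008/260-derived
    author_nationality: str = ""         # would need authority-file lookup
    subject_headings: list = field(default_factory=list)

def national_presence(record: BibRecord, country: str, demonym: str) -> set:
    """Return which facets of the country's national presence apply."""
    facets = set()
    if country.lower() in record.place_of_publication.lower():
        facets.add("published in")
    if record.author_nationality.lower() == demonym.lower():
        facets.add("published by nationals")
    if any(country.lower() in s.lower() for s in record.subject_headings):
        facets.add("published about")
    return facets

# Treasure Island: written by a Scot (Robert Louis Stevenson) but first
# published in London, so it enters the Scottish presence via its author.
record = BibRecord(
    title="Treasure Island",
    place_of_publication="London",
    author_nationality="Scottish",
    subject_headings=["Pirates -- Fiction", "Treasure troves -- Fiction"],
)
print(national_presence(record, "Scotland", "Scottish"))
# -> {'published by nationals'}
```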

Key findings from the report include:

* A national presence in the published record is composed of materials published in or about a country or by its people.

* A national presence in the published record is identifiable in library data using mainly automated processing.

* The Scottish presence in the published record includes nearly two million distinct publications.

* The Scottish presence in the published record is widely held in library collections around the world.

* Scottish authors are especially influential in the global diffusion of the Scottish presence in the published record.

* Treasure Island may be the most globally influential Scottish work in the published record.

* Tartan Noir and works about or set in Scotland are key ways the country is manifested in contemporary publishing.

* The national presence in the published record is a useful concept for libraries and scholars.

This report will be of interest to libraries responsible for collecting and preserving their country’s contribution to the published record, as well as to scholars interested in exploring how countries manifest and transmit their cultural heritage through the published record.

View related links and download the report: http://www.oclc.org/research/news/2013/09-17.html

“Out of Cite, Out of Mind”: Current Practices and Policies on Data Citation

The US Committee on Data for Science and Technology (CODATA) and the Board on Research Data and Information (BRDI) are pleased to announce the publication of a new report: Out of Cite, Out of Mind: The Current State of Practice, Policy, and Technology for the Citation of Data. The report was authored by the CODATA-ICSTI Task Group on Data Citation Standards and Practices and edited by Yvonne M. Socha. The project was directed by the staff of the US CODATA/BRDI.

The report was published by the CODATA Data Science Journal on 13 September 2013 and is freely and openly available online. The document was published electronically only, with no print edition.

The report discusses the current state of data citation policies and practices, their supporting infrastructure, a set of guiding principles for implementing data citation, challenges to the implementation of good data citation practices, and open research questions. This is the second report on data citation issues published by the collaboration of the CODATA-ICSTI Task Group and the US CODATA/BRDI. The first report, For Attribution – Developing Data Attribution and Citation Practices and Standards (2012), is also freely and openly available from the National Academies Press online.
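As a concrete illustration of what such a citation contains, the sketch below assembles a dataset citation from core elements commonly recommended for data citation (creator, year, title, version, publisher/repository, persistent identifier), in the general style of the DataCite recommended format. The layout is illustrative rather than the report’s prescribed form, and all values are placeholders.

```python
# Illustrative only: assemble a dataset citation from core elements commonly
# recommended for data citation (creator, year, title, version, repository,
# persistent identifier). This follows the general DataCite-style pattern,
# not necessarily the exact form endorsed by the report. All values below
# are placeholders.

def format_data_citation(creators, year, title, version, publisher, doi):
    names = "; ".join(creators)
    return (f"{names} ({year}): {title}. Version {version}. "
            f"{publisher}. https://doi.org/{doi}")

print(format_data_citation(
    creators=["Smith, J.", "Chen, L."],         # hypothetical creators
    year=2013,
    title="Example Ocean Temperature Dataset",  # hypothetical dataset
    version="2.1",
    publisher="Example Data Repository",
    doi="10.1234/example",                      # placeholder DOI
))
```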

“Out of Cite, Out of Mind”: http://www.jstage.jst.go.jp/article/dsj/12/0/12_OSOM13-043/_article

For Attribution – Developing Data Attribution and Citation Practices and Standards: http://www.nap.edu/catalog.php?record_id=13564

Full, open proceedings available for DC-2013 “Linking to the Future”

The collocated conferences for the DCMI International Conference on Dublin Core and Metadata Applications (DC-2013) and the 10th International Conference on Preservation of Digital Objects (iPRES-2013) in Lisbon, September 2-6, attracted 392 participants from over 37 countries. In addition to the Tuesday through Thursday conference days, comprising peer-reviewed paper sessions and special sessions, 223 participants attended pre-conference tutorials and 246 participated in post-conference workshops for the collocated events.

The peer-reviewed papers and presentations are now available on the conference web site presentation page. In addition to links to PDFs of papers, project reports and posters (and their associated presentations), the published proceedings include presentation PDFs for the following:

1. Tutorials:

* Ivan Herman: “Introduction to linked open data (LOD)”.

* Steven Miller: “Introduction to ontology concepts and terminology”.

* Kai Eckert: “Metadata provenance”.

* Daniel Garijo: “The W3C provenance ontology”.

2. Special sessions:

* Thomas Baker, Karen Coyle: “Application profiles as an alternative to OWL ontologies”.

* Thomas Baker, Bernard Vatant, Pierre-Yves Vandenbussche: “Long-term preservation and governance of RDF vocabularies (W3C Sponsored)”.

* Gildas Illien: “Data enrichment and transformation in the LOD context: poor & popular vs rich & lonely – can’t we achieve both?”

* Richard Wallis: “Why Schema.org?”

3. Workshops:

* Diane Ileana Hillmann: Vocabulary Day.

* Jane Greenberg: Cyber Infrastructure and Metadata Protocols (CAMP)-4-DATA.

Proceedings URL: http://dcevents.dublincore.org/index.php/IntConf/dc-2013/schedConf/presentations

Conference home URL (for session descriptions): http://dcevents.dublincore.org/index.php/IntConf/dc-2013/

BISG publishes new edition of Best Practices for Product Metadata

The Book Industry Study Group (BISG) has announced the publication of the revised edition of Best Practices for Product Metadata: Guide for North American Data Senders and Recipients, available for immediate download.

BISG’s Product Metadata Best Practices was first published in 2005 and provided a clear roadmap for accurate data throughout the supply chain in order to increase efficiency between trading partners and improve discoverability.

Since then, metadata has become both more important and more complex. It touches every corner of the ecosystem, from editorial to operations to marketing upstream, and from buying to merchandising to sales downstream. Every player in the ecosystem relies on accurate transmittal or receipt of metadata to ensure that search results deliver accurate information to consumers. The pressure on metadata as a crucial element of book discovery and sales, a shared goal for all involved, will only increase.

The revision outlines practices that address the sometimes competing interests of both senders and recipients. It reflects the current state of digital workflows and will be updated frequently to keep it relevant to the changing business and technology environment. It combines recommendations for data senders and receivers in one handbook; includes better support for digital products, the use of marketing data points for increased discovery, and usage tips and links for ONIX 3.0; and covers both the Canadian and US markets.
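For readers unfamiliar with ONIX, the following sketch generates a skeletal ONIX 3.0 product record using only Python’s standard library. The element names follow the ONIX 3.0 reference tag set; the code-list values in the comments (e.g. ProductIDType 15 for ISBN-13, ContributorRole A01 for author) should be verified against EDItEUR’s published schema and code lists before any real use.

```python
# A hedged sketch of emitting a minimal ONIX 3.0 product record. Element
# names follow the ONIX 3.0 reference tags; code-list values are shown for
# illustration and should be checked against EDItEUR's code lists.

import xml.etree.ElementTree as ET

def minimal_onix_product(isbn13: str, title: str, author: str) -> str:
    msg = ET.Element("ONIXMessage", release="3.0")
    product = ET.SubElement(msg, "Product")
    ET.SubElement(product, "RecordReference").text = f"ref-{isbn13}"
    ET.SubElement(product, "NotificationType").text = "03"  # confirmed record

    pid = ET.SubElement(product, "ProductIdentifier")
    ET.SubElement(pid, "ProductIDType").text = "15"  # ISBN-13
    ET.SubElement(pid, "IDValue").text = isbn13

    desc = ET.SubElement(product, "DescriptiveDetail")
    td = ET.SubElement(desc, "TitleDetail")
    ET.SubElement(td, "TitleType").text = "01"  # distinctive title
    te = ET.SubElement(td, "TitleElement")
    ET.SubElement(te, "TitleElementLevel").text = "01"  # product level
    ET.SubElement(te, "TitleText").text = title

    contrib = ET.SubElement(desc, "Contributor")
    ET.SubElement(contrib, "ContributorRole").text = "A01"  # author
    ET.SubElement(contrib, "PersonName").text = author

    return ET.tostring(msg, encoding="unicode")

print(minimal_onix_product("9780000000000", "Example Title", "Jane Doe"))
```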

Best Practices for Product Metadata: Guide for North American Data Senders and Recipients was written and compiled for BISG and BookNet Canada (BNC) by the BISG Metadata Committee. BISG and BNC thank EDItEUR Limited for the information on ONIX data elements and its expertise in helping prepare this report.

Download Best Practices for Product Metadata: http://www.bisg.org/docs/MetadataBP-2013.pdf

BISG Metadata Committee: http://www.bisg.org/committee-1-14-metadata-committee.php

NISO and UKSG release draft revised recommendations for Knowledge Bases and Related Tools (KBART)

The National Information Standards Organization (NISO) and UKSG have announced the release of a draft for public comment of a revision to the Knowledge Bases and Related Tools (KBART) recommended practice. Issued in 2010, the original recommended practice provided all parties in the information supply chain with straightforward guidance about metadata formatting to ensure the exchange of accurate metadata between content providers and knowledge base developers. Building on the initial recommendations, the draft revision focuses on the more granular, complex issues that cause problems in metadata supply, including consortia-specific metadata transfer, metadata transfer for Open Access publications, and metadata transfer for e-books and conference proceedings.
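A KBART title list is a plain, tab-delimited text file with a fixed header row and one row per title. As a rough sketch, the code below writes a one-title list using the Phase I field set; the draft Phase II revision extends these fields, and the file name shown only gestures at the recommended provider/region/date naming convention, so consult the current recommended practice before publishing a real file.

```python
# Sketch: write a minimal KBART-style title list as UTF-8, tab-delimited
# text. Column names follow the KBART Phase I recommended practice; the
# sample row values are hypothetical.

import csv

KBART_FIELDS = [
    "publication_title", "print_identifier", "online_identifier",
    "date_first_issue_online", "num_first_vol_online", "num_first_issue_online",
    "date_last_issue_online", "num_last_vol_online", "num_last_issue_online",
    "title_url", "first_author", "title_id", "embargo_info",
    "coverage_depth", "coverage_notes", "publisher_name",
]

rows = [{
    "publication_title": "Journal of Examples",   # hypothetical title
    "print_identifier": "1234-5678",
    "online_identifier": "8765-4321",
    "date_first_issue_online": "1995-01-01",
    "title_url": "http://example.com/joe",
    "title_id": "JOE",
    "coverage_depth": "fulltext",
    "publisher_name": "Example Press",
}]

# File name loosely follows the provider_region_YYYY-MM-DD.txt convention.
with open("provider_global_2013-10-01.txt", "w", newline="",
          encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=KBART_FIELDS, delimiter="\t",
                            restval="")  # missing values stay blank
    writer.writeheader()
    writer.writerows(rows)
```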

“Since the first recommended practice was issued, over 50 publishers and content providers have endorsed KBART and demonstrated their commitment to good quality metadata provision”, states Magaly Bascones, Data Manager at Jisc Collections and Co-chair of the KBART working group. “The endorsement process requires submission of a sample file and verification by the KBART working group. With this endorsement, users can be assured that the providers’ metadata is trusted and has the required level of granularity without the burdensome task of title-by-title checking”.

“The experience of the endorsing publishers and feedback from a survey of libraries and consortia identified the areas of focus for this expanded KBART revision”, explains Chad Hutchens, Head of Digital Collections and Digital Resources Librarian and Co-chair of the KBART working group. “Following the public comment period, the KBART working group will make any needed revisions and finalize the recommendations for publication. Also available is a KBART Information Hub on the NISO website that provides supporting materials about KBART, including the KBART Glossary, endorsement information, a registry of knowledge base supply chain contacts, and background information on OpenURL and knowledge bases”.

“Following the publication of the KBART Phase II recommended practice, the project will transfer to standing committee status within NISO”, states Nettie Lagace, NISO’s Associate Director for Programs. “This Committee will be responsible for managing the endorsement process, providing ongoing education and promotion of KBART, and maintaining the Information Hub”.

The KBART Phase II draft is open for public comment through October 4, 2013.

To download the draft or submit online comments, visit the KBART Information Hub at: http://www.niso.org/workrooms/kbart

Figshare for Institutions: new solution for sharing, self-archiving research data

Figshare has announced the launch of Figshare for Institutions – a simple and cost-effective software solution for academic and higher education establishments to both securely host and make publicly available their academic research outputs. Figshare allows academic institutions to publish, share and get credit for their research data, hosting videos, datasets, posters, figures and papers in a cost-effective way.

Institutions can choose to make as little or as much of their research publicly available as their funding mandates require, ensuring that they can take control of all their research outputs – to make them citable, searchable and discoverable. Academic publishers Nature, Public Library of Science (PLOS) and F1000 currently trust Figshare to bring their published articles to life – either by making the entire supplementary article data open to their readers or by providing the article content.

With new funder mandates requiring institutions to provide self-archiving and to make increasing amounts of their research outputs publicly available, Figshare allows institutions to easily aggregate research at both the departmental and institutional level, automatically providing a self-populating institutional repository with reporting capabilities. Researchers can quickly upload their data and easily retrieve research and data with simple file curation. These research outputs are then just one click away from being made openly and persistently available if mandated by the institutional funder. The uploaded research is also citable and trackable via a digital object identifier (DOI) and detailed reporting metrics are available for the institution, to track the interest in its publicly available research.

Mark Hahnel, founder of Figshare, explains: “Academics can struggle to organise their research outputs, as I once did. Figshare for Institutions integrates into their existing workflow so that their data management requirements are complied with subconsciously. The institutions also benefit by seeing the full reputational impact of all of the research they generate, a huge step up from the siloed system that exists within many research organisations at the moment”.

Included:

* Large amounts of secure, private storage plus unlimited public space.

* Simple, institution-wide management and monitoring of all research outputs for institution staff with subject categorisation per department.

* Access-controlled team sharing and collaborative spaces with the ability to add notes and comments to files.

* An institutional dashboard with detailed metrics on the impact of publicly available data.

* All research outputs can be made citable, visualisable, embeddable and trackable with one click.

* The ability to push research to any internal repository.

* Institution-wide compliance with open data requirements of funding bodies.

* Dedicated support team.

Persistent and trackable: all publicly available outputs are citable and trackable with a unique DOI. The impact of these objects can then be tracked, and Figshare offers institutions cumulative metrics, such as citations and downloads, as well as newer measures such as altmetrics. Figshare and the CLOCKSS Archive have partnered to preserve the content publicly available on Figshare in CLOCKSS’s geographically and geopolitically distributed network of redundant archive nodes, located at 12 major research libraries around the world.
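Because public Figshare outputs carry DataCite DOIs, a formatted citation can be fetched programmatically via standard DOI content negotiation against doi.org, as in this small sketch. The DOI in the usage comment is a placeholder, not a real item.

```python
# Sketch: fetch a formatted citation for a DOI via content negotiation at
# doi.org (supported for DataCite DOIs, which Figshare items use).

import urllib.request

def fetch_citation(doi: str, style: str = "apa") -> str:
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Usage (placeholder DOI, not a real Figshare item):
# print(fetch_citation("10.6084/m9.figshare.0000000"))
```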

Secure and controlled access: Figshare is hosted using Amazon Web Services to ensure the highest level of security and stability for research data. Amazon S3 stores multiple redundant copies of information, so you do not have to worry about ever losing your master copy. The security and persistence of all the files hosted on Figshare make it easy to guard against plagiarism of the research data held there, as all uploads are time-stamped. The private collaborative spaces feature allows users to keep their research visible only to themselves, or available to specified collaborators or their PI, ensuring that files can be easily located and shared.

For more information: http://figshare.com

European landscape study of research data management

The European landscape study of research data management offers an overview of how to effectively support researchers in their data management. It looks at interventions by funding agencies, research institutions, national bodies and publishers across the European Union member states. The report also makes recommendations that organisations can adopt to help their researchers.

The European landscape study of research data management, carried out by SURF, is part of the European SIM4RDM project (support infrastructure models for research data management). The aim of this project is to equip researchers with the knowledge, skills and support infrastructures they need to adopt good research data management practices. The landscape study is the first step towards a “cookbook” for implementing research data management. Together with the project partners, SURF will work on case studies that will be the proof of the pudding.

The landscape study offers recommendations for all stakeholders. These recommendations are being incorporated into a European intervention and evaluation framework for Horizon 2020. An overview of the recommendations follows:

* National research organisations could take the lead in drafting a national code of conduct that encourages the creation and use of data management plans, suggest and supply appropriate tooling, and take an active role in data citation practices.

* Funding bodies should encourage researchers by offering clear instructions to create a data management plan at the level of the project proposal, and they can designate centres to store research data. In the Netherlands, both NWO and ZonMw are working on a policy, instructions and support for researchers.

* The number of research institutions using a data management policy is growing. Many institutions offer an infrastructure to store, manage and access research data, comprising a variety of file storage and library systems. In the Netherlands, more and more universities are setting up data management support services.

* Interviews with researchers show that policies should primarily cover roles and responsibilities for managing data; mechanisms for storage, backup, registration, deposit and retention of research data; access to and re-use of data; open accessibility and availability of data; and long-term preservation and curation.

* Not many publishers have a policy in place yet. The policies that do exist require authors to link to the data underlying an article or to make entire datasets available when submitting the article, but not to keep them up to date. A dialogue should be established with publishers and publishers’ associations about the definition of data policies. Possible elements are persistent identifiers for the citation of data and reliability requirements for the repositories in which data are to be deposited.

The setting for research data management is broader than initially anticipated. Other stakeholders need to be brought in as well, e.g. editorial boards of scientific journals, data centres and infrastructure providers. Research societies may intervene with the development of common practices. Infrastructure providers could intervene with common data formats for preservation and storage, tools and utilities. Policies from funders, institutes and editorial boards may influence researchers to use the principle of “share and share alike”.

The landscape report is the first step in exploring the stakeholders involved with research data management practice and support infrastructures. With this overview of current and planned interventions, the EU will determine models for coordinating such interventions so as to ensure maximum impact, and will implement and pilot an intervention model and evaluation framework. The EU partners will build international consensus on long-term strategy and policy in the area of research data.

European landscape study of research data management: http://tinyurl.com/SIM4RDM-landscape-report

Overview of recommendations from the study: http://www.surf.nl/nl/publicaties/Documents/SIM4RDM-recommendations_flyerDEF.pdf

Alexander Grossmann on the state of Open Access: insights from the scholarly publishing industry

Richard Poynder (http://poynder.blogspot.com/), on his blog Open and Shut?, recently posted a new Q&A in a series exploring the current state of Open Access. This one is with Alexander Grossmann, who earlier this year took up a post as Professor of Publishing Management at the Leipzig University of Applied Sciences. To do so, Grossmann gave up a job as Vice President at the scholarly publisher De Gruyter, returning to research after ten years in the publishing industry. In that time he also served as Managing Director at Springer-Verlag GmbH in Vienna and as Director of Physics Publishing at Wiley.

Grossmann has also recently co-founded an OA venture called ScienceOpen.

Some excerpts from the Q&A follow.

“I have the impression that there is no publishing house which is either able or willing to consider the rigorous change in their business models which would be required to actively pursue an Open Access publishing concept. However, the publishers are certainly aware of the PR value of Open Access and many are taking steps in this direction by founding new gold Open Access journals, offering hybrid models or acquiring OA companies. All attractive trimmings as long as the profit margins from subscription-based journals are not threatened”.

Active lobbying against OA takes place in parallel to these cosmetic offerings.

“I have been involved in many internal meetings with publishers since the early 2000s in which copyright issues, embargo periods, or self-archiving were heavily discussed. The Science/Technology/Medicine (STM) sector has always been particularly demanding, and even within a publishing house one always remains an advocate for one’s authors – physicists were early proponents of Open Access with the ArXiv preprint database for example. I always tried to sensitize my colleagues to these demands – only a fair and transparent handling of access issues would result in a positive and persistent settlement between authors and publishers. But at complete variance to my earlier expectations, publishers continue to tighten their rules, for instance for self-archiving and embargoing. The yearly drop in subscription numbers has everyone on edge and the occasional experiments in Open Access are not designed to save the bottom line”.

“The introduction of ‘Green OA’ should be considered simply as the first response of the publishing industry to the new legal requirements or regulations introduced by funding agencies such as the National Institutes of Health (NIH) in the US. When it was first introduced I expected Green OA to be an intermediate concept to be replaced by a new business and publishing concept in general. At variance to this expectation, the concept has become established as something which shall exist forever. Certainly Green OA cannot be considered as meeting researchers’ demand for an easy way to immediately make their research freely available to everybody who is interested in accessing the results”.

“[I]t is not sufficient to continue to launch single new OA journals in individual scientific disciplines. Rather, both the visibility and acceptance of OA concepts among the scholarly community worldwide needs to be increased.

The development of a platform concept similar to ScienceOpen for many scholarly disciplines may be one approach, and that is one of the reasons why I launched the project”.

“The OA movement should uniformly focus on supporting libraries to develop strategies to modify their budget policies. This should result in having more money available to be spent on OA at their institutions. At least it should be possible to reallocate a part of the present budget which is spent on big deals for subscription journals towards OA in order to meet the costs of Gold OA publications. As long as libraries are caught in the big deals and traditional subscription models, we all have less chance to move forward with OA. Although this task sounds of a technical nature, it seems to me to be the prerequisite to providing the necessary budget for more OA publishing today and in the future”.

“The present business models of subscription-based publishing force librarians to spend most of their budget or all of their budget on package deals with the major publishers. Just to illustrate the situation: for some libraries, in particular smaller libraries which cannot afford all the journals they need, publishers offer to take their whole budget to get access to the complete list of that publisher. As a result, no money is left to buy the publications of other publishing houses, or other content resources. However, those libraries accept that situation as the lesser evil”.

“It is apparent that such a situation and such a business practice is totally unacceptable in terms of providing researchers and their institutions with the freedom and flexibility to access the information they need for their work, and to make the outcome of that research available for everybody worldwide working on the same problem. I am confident that it simply requires one or a few key scholarly institutions to make a significant change in how their libraries acquire and fund their research content”.

The complete Q&A with Alexander Grossmann can be read at Open and Shut?: http://poynder.blogspot.co.uk/2013/08/alexander-grossmann-on-state-of-open.html

ScienceOpen: http://www.scienceopen.com/

Open Access Media Importer project is finalist for PLOS ASAP Award

PLOS has announced the six finalists for the Accelerating Science Award Program (ASAP). The program recognizes the use of scientific research, published through Open Access, that has led to innovations benefiting society. Major sponsors include the Wellcome Trust and Google. Three top awards of US$30,000 each will be announced on October 21 in Washington, DC, at an Open Access Week kickoff event hosted by the Scholarly Publishing and Academic Resources Coalition (SPARC) and the World Bank.

A recent post on the PLOS “Mind the Brain” blog takes a closer look at one of the projects among the six finalists: Visualizing Complex Science (Daniel Mietchen, PhD, Raphael Wimmer and Nils Dagsson Moskopp). Their project explores an opportunity in exploiting Open Access literature to illustrate articles in Wikipedia. From the blog:

Many scientific articles have a “supplementary” materials section, which can be rich in multimedia, but these artifacts may not be as easy to find as those that make up the main body of scientific manuscripts. What Daniel, Raphael and Nils did was maximise the impact of those research outputs by putting them in a place where they can be found, explored and reused by scientists and non-scientists alike.

They developed a tool called Open Access Media Importer (OAMI) that searches for multimedia files in Open Access articles in PubMed Central and uploads them to Wikimedia. This tool exemplifies the added value of papers published under open access using a libre copyright licence such as CC BY. Not only are the articles available to read, but they can also be repurposed in other contexts. The files that the OAMI bot uploaded now illustrate more than 200 English Wikipedia pages, and many more in other languages.

Q. How did you get started with this project?

Daniel Mietchen: My PhD was on magnetic resonance imaging, which primed me to work with videos, and my first postdoc was on music perception, which naturally involved a lot of audio. Both made me aware of all the audiovisual material that was hidden in the supplements of scholarly articles, and I found that the exposure of that part of the literature left much to be desired. For instance, every video site on the web provides thumbnails or other forms of preview of video content, but back then, no scholarly publisher exposed video content this way. Wikimedia Commons did. I also noticed that Wikipedia articles on scientific topics were rarely illustrated with multimedia. So the two fit well together. Nils, Raphael and I met online, and then sent our first funding proposal in 2011 in order to automate the import of supplementary audio and video files from scholarly articles into Wikimedia Commons.

We chose to start with PubMed Central. It is one of the largest repositories of scholarly publications, many of which have supplementary materials, and it has an API we could use.
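To give a sense of what querying that API involves, here is a small sketch using NCBI’s public E-utilities to search the PMC Open Access subset for articles mentioning video. The search term is only illustrative; the actual OAMI pipeline (linked below) does far more, including licence filtering, media extraction and format conversion.

```python
# Sketch: search the PubMed Central Open Access subset via NCBI's public
# E-utilities for articles that mention video content. Illustrative only.

import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = urllib.parse.urlencode({
    "db": "pmc",                                    # search PubMed Central
    "term": '"open access"[filter] AND video[All Fields]',
    "retmax": 5,
    "retmode": "json",
})

with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
    result = json.load(resp)

print(result["esearchresult"]["idlist"])  # candidate PMC article IDs
```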

Q. How far have you come?

DM: We have now imported basically all audio and video materials from suitably licensed articles available from PubMed Central, save a few where there were technical difficulties with file conversion or upload. Initially, we did not know how many files this would be, and had roughly estimated (there is no easy way to search for supplementary video or audio files) the number at somewhere between 5,000 and 10,000 back in 2011. The bot now adds several hundred files from newly published articles every month and passed 14,000 uploads to Wikimedia Commons earlier this week. So if you are going to publish multimedia with a suitably licensed paper in a journal indexed in PubMed Central, you – and anyone else – can find it on Commons shortly thereafter.

Read the full interview at: http://blogs.plos.org/mindthebrain/2013/10/01/asap-awards-interview-with-daniel-mietchen/

OAMI on GitHub: https://github.com/erlehmann/open-access-media-importer

IFLA Library Launches; New Repository for IFLA’s Conference Content, Documents

In July IFLA President Ingrid Parent launched the IFLA Library (library.ifla.org), a repository for IFLA World Library and Information Congress (WLIC) papers and, in future, other IFLA publications. “This improved accessibility to IFLA’s publishing, through the IFLA Library, will bring real benefits to participants at the IFLA WLIC, to IFLA members, and to library and information professionals worldwide. I congratulate all those who worked hard to implement this project and I look forward to the further enhancements to come over the next year”, said Parent.

Genevieve Clavel-Merrin, Chair of the Governing Board’s Repository Working Group welcomed the launch: “The project Working Group members are all delighted to see this become a reality. The IFLA Library allows IFLA to share, manage, and archive its documents, and I look forward to seeing it grow and develop”.

The IFLA Library is part of IFLA’s digital content programme key initiative, and is designed to provide a repository that collects IFLA’s own publications in one place for ease of location, search, display and preservation. IFLA selected EPrints Services to build and host the repository.

Over 200 papers from “Future libraries: infinite possibilities”, the 2013 WLIC, held in Singapore on 18-22 August, are now available. Presentation slides are also available where permission has been granted by the speaker.

In line with IFLA’s Open Access and Copyright Policy, authors of papers accepted for the congress have assigned a Creative Commons Attribution 3.0 Unported licence (CC BY 3.0) to their work. This licence enables IFLA to make copies of the papers available in its repository and permits the widest possible dissemination and use of the papers.

All content will be discoverable via Google and Google Scholar, and development of the IFLA Library platform will continue to enhance the search, browse and help facilities for users.
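Repositories built on EPrints also conventionally expose their metadata for machine harvesting over OAI-PMH. The sketch below, using the third-party sickle package (pip install sickle), shows what harvesting from the IFLA Library might look like; the /cgi/oai2 endpoint is the EPrints default and is assumed here, not stated in the announcement.

```python
# Sketch: harvest Dublin Core records from the IFLA Library over OAI-PMH
# using the third-party "sickle" package. The /cgi/oai2 path is the EPrints
# default and is an assumption, not confirmed by the announcement.

from sickle import Sickle

sickle = Sickle("http://library.ifla.org/cgi/oai2")  # assumed endpoint
records = sickle.ListRecords(metadataPrefix="oai_dc")

for record in records:
    md = record.metadata                 # dict of Dublin Core fields
    print(md.get("title"), md.get("identifier"))
    break                                # show just the first record here
```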

The IFLA Library is available from the IFLA home page and is also linked from the WLIC programme, from where congress participants can search, read, browse, and download papers.

Read/download papers from WLIC 2013, “Future libraries: infinite possibilities”: http://library.ifla.org/view/conferences/2013/

IFLA Library: http://library.ifla.org/

SIIA releases guide on the use of open educational resources

The Software & Information Industry Association (SIIA), the principal trade association for the software and digital content industry, has released the “Guide on the use of open educational resources (OER) in K-12 and postsecondary education”. This guide provides a framework for understanding OER, and it examines development and implementation costs, current business models, government and philanthropy’s role, and other considerations around the use of OER.

The guide includes the following:

1. OER definition, including full explanations of related copyright and licensing issues.

2. Total cost of development/ownership of instructional materials, including implications for those creating and implementing OER.

3. Business/funding models being used to develop and support OER by content developers and aggregators, both for-profit and non-profit.

4. Government initiatives, including a sampling of key federal, state and international OER policies and grants.

5. OER frequently asked questions.

The guide was developed under the direction of the SIIA OER working group. It was authored by independent consultants Sue Collins of CollinsConsults and Peter Levy of Learning in Motion. Their knowledge, perseverance, and commitment to excellence made this document possible. The guide is available to all under a CC BY license and is crafted especially to inform legislators, government officials, education leaders, faculty, and content developers and aggregators.

Download the guide: http://www.siia.net/index.php?option=com_docman&task=doc_download&gid=4029&Itemid=318

FutureLearn is launched: First UK-led provider of massive open online courses

FutureLearn, the first UK-led provider of massive open online courses (MOOCs), launched in September, offering learners around the world access to free, high-quality courses from its internationally renowned university partners.

Social interaction is central to the FutureLearn experience, enabling people to learn actively by engaging in conversations around the learning material, or vicariously, by following discussions. FutureLearn has also been designed to work on smart phones, tablets and desktop computers, so that learners can enjoy the same high-quality user experience, regardless of the screen size.

FutureLearn is wholly owned by The Open University. The new web site combines the best elements of the social web with The Open University’s 44 years of expertise in distance and open learning.

The FutureLearn web site opened on 18 September as an open beta, which will run until early 2014. Learners will be able to sign up for a selection of courses from FutureLearn’s university partners, with learner feedback used to inform the ongoing development of the website. So far learners from over 165 countries have registered their interest in taking a course on FutureLearn.

Simon Nelson, CEO of FutureLearn said, “We wanted to make FutureLearn a fresh, different and enjoyable user experience. We have designed the website in line with principles of effective learning, such as storytelling, discussions and celebrating progress. We decided to go live with FutureLearn now, in an open testing phase, so that we can remain responsive to learners as we continue to develop the website”.

Martin Bean, Vice-Chancellor of The Open University, said, “Time and again we have seen the disruptive impact the internet can have on industries – driving innovation and enhancing the customer experience. I have no doubt MOOCs will do the same for education – offering people new and exciting ways to learn. This is why we took the initiative to join forces with a range of university and cultural partners to create FutureLearn – spearheading the UK’s response to the rise of MOOCs and offering students a new and innovative way to access courses. It is so exciting to see the first of these going live and I can’t wait to see the range on offer expand over the coming months”.

In addition to a strong social architecture and performance on a range of devices, FutureLearn’s key features include:

* Inspiring and rich learning material: ideas are communicated in a variety of ways including video, audio and text articles. Many of these are designed to be shared and discoverable via web searches.

* Community learning: learner profile pages will help learners build a presence within FutureLearn, interact with and find out more about other learners, and follow and be followed in return. These features, based on the principles which underpin social networks, will encourage community learning.

* Contextual feedback for learners: quizzes are designed so that feedback and hints are given after each answer, to help learners spot gaps in their knowledge. Progress pages show how much of the course they have completed, their overall score, and how much they are interacting with others.

* Course creator and analytics dashboard: FutureLearn helps its partners by providing a simple web-based course creation tool and up-to-the-minute statistics on how learners are interacting with courses, to allow educators to respond to the needs of their learners.

* Record of learning: all learners will have an on-screen record of learning that they can share beyond the FutureLearn community. FutureLearn will also be piloting paid-for statements of accomplishment and real-world exams at local test centres later this year.

A list of pilot courses from 20 of FutureLearn’s partner institutions is available for learners to sign up for now, with eight scheduled to begin between October and December this year:

1. Begin programming: build your first mobile game, from Reading University.

2. England in the time of King Richard III, from Leicester University.

3. Fairness and nature: when worlds collide, from Leeds University.

4. The Mind Is Flat: the shocking shallowness of human psychology, from Warwick University.

5. Improving your image: dental photography in practice, from Birmingham University.

6. Introduction to ecosystems, from The Open University.

7. The secret power of brands, from the University of East Anglia.

8. Web science: how the web is changing the world, from Southampton University.

Claire Davenport, Commercial Director at FutureLearn said, “Our partners already have a range of courses in production and our pipeline for 2014 should have something to appeal to everyone, whether studying to improve their career prospects, enrich their lives or enliven their dinner conversations”.

FutureLearn: http://www.futurelearn.com/

Development begins for shared national library services in Scotland and Wales

Work is under way in Scotland and Wales to develop shared library IT systems across their higher education institutions, thanks to initial funding and support from Jisc. Ultimately, this will give students access to information hosted at all institutions, opening up a wealth of teaching and learning materials. There will also be opportunities for cost savings.

Higher education institutions in Wales are currently joining with the National Library of Wales to start development of a joint procurement process for a shared library management system. The shared system will open up potential opportunities for collaboration on other levels – including the possibility of reciprocal borrowing across the libraries and shared cataloguing of collections. They are looking to have these systems in place by the summer of 2015-2016, and a tender for the work will go out in the New Year.

Tracey Stanley, Deputy University Librarian and Assistant Director of Information Services at Cardiff University, who has been heavily involved in the work, says: “The Welsh Higher Education Libraries and the National Library of Wales have developed a compelling vision for a shared library system. A shared system will give us the opportunity to work more closely together for the benefit of our users, for example, on sharing content, collections or services. We also have an opportunity to share the costs of development and support, share expertise across Wales and work together to enhance our services”.

The first phase of the Scottish project, “The Benefits of Sharing”, has shown what a shared national IT support system could offer higher education, and possibly further education, institutional libraries. The key benefits include:

* All items from Scottish higher education institutional libraries and the National Library of Scotland being available to and searchable by researchers and students, providing a higher-quality service.

* Supported procurement, making shared services cost-effective and allowing more funds to be spent on resources.

Phase two of the work has now begun, and the team is working with a task force at the Scottish Confederation of University and Research Libraries (SCURL) to draw up a plan of what the service and systems would look like, for example what would be included (room bookings, electronic support). They hope to have this in place by December; if a clear vision is developed, a business plan will then be devised for implementation.

Mark Toole, director of information services at the University of Stirling, who is heavily involved in the project, says: “In Scotland, current Government policy is encouraging universities to work together, often through grant bids, to maximise overall research outcomes and impact. So the development of this type of national IT service has a lot of support and goodwill behind it”.

“We are grateful to Jisc for funding the initial investigations into this work and for supplying us with many tools that we can bring together and build on when we start to look at implementation, such as KnowledgeBase+. It is going to be challenging to ensure that we deliver a service that meets user needs, but the potential is there for a shared service to bring great benefits to all involved”.

Ben Showers, Programme Manager at Jisc, explains: “The collaboration on the development of library systems and services in Scotland and Wales has the potential to transform the experience of students and researchers who attend university in these countries. It is easy to imagine the possibilities – seamless access to a wide range of content and resources, through to innovative services built on top of this new infrastructure, such as powerful recommendation engines and integration with teaching and learning systems”.

“By collaborating on the essential infrastructure these universities are creating the resources and space that will enable them to develop the future services and systems that their students and researchers will need”.

For more information on the Welsh project visit their blog: http://blogs.cf.ac.uk/sharedlms/

Read the report of the first phase of the Scottish project: http://libraryblogs.is.ed.ac.uk/benefitsofsharing/files/2013/04/The-Benefits-Of-Sharing-Summary-Report.pdf

Adoption of mobile and social location-based services: new report from Pew

The role of location in digital life is changing as growing numbers of internet users are adding a new layer of location information to their posts, and a majority of smartphone owners use their phones’ location-based services.

A new report by the Pew Research Center’s Internet Project sheds light on three major aspects of how location figures in digital life:

1. Many people use their smartphones to navigate the world: 74 percent of adult smartphone owners ages 18 and older say they use their phone to get directions or other information based on their current location.

2. There is notable growth in the number of social media users who are now setting their accounts to include location in their posts: among adult social media users ages 18 and older, 30 percent say that at least one of their accounts is currently set up to include their location in their posts, up from 14 percent who said they had ever done this in 2011.

3. There is a modest drop in the number of smartphone owners who use “check in” location services: some 12 percent of adult smartphone owners say they use a geosocial service to “check in” to certain locations or share their location with friends, down from 18 percent in early 2012. Among these geosocial service users, 39 percent say they check into places on Facebook, 18 percent say they use Foursquare, and 14 percent say they use Google Plus, among other services.

Taken together, these trends show the ascent of location awareness and the role it might play in the life of users – and the technology companies that are scrambling to provide more alert-style applications that tell people who and what is near them.

Local is a bigger part of the broader social media landscape, and the rise of local services is strongly tied to the increase in smartphone ownership. The majority of smartphone owners say they are making use of their phones’ location-based services, and the share of all adults who do this continues to grow along with increasing smartphone adoption.

“The location layer is a core aspect of the smartphone experience, one that brings a new dimension to how people find and share information on the go”, said Kathryn Zickuhr, Research Associate for the Pew Research Center’s Internet Project and author of the report. “And for an increasing number of social media users, location tagging offers a new way to share context around photos and other information they share on their social networks”.

Yet even as most smartphone owners use their phones’ abilities to get location-specific information, data from earlier surveys also shows that mobile users of all ages say they have turned off location-tracking features at some point due to privacy concerns:

* As of September 2012, almost half (46 percent) of teen app users say they have turned off the location tracking feature on their cell phone or in an app on a phone or tablet because they were worried about other people or companies being able to access that information.

* As of April 2012, in response to a different question, over a third (35 percent) of adult cell app users said they have turned off the location-tracking feature on their cell phone because they were concerned that other individuals or companies could access that information.

“Perhaps unsurprisingly, our research has shown that location is considered to be sensitive information”, Zickuhr said. “So even though most smartphone owners use their phone’s location-tracking feature for information, many also disable that feature at various times to prevent third parties from accessing that same information”.

Read/download the full report: http://pewinternet.org/Reports/2013/Location.aspx
