Open Data Can Empower Archaeologists

Over the last few years, the DART project has collected large amounts of archaeological data. As part of the project, a group of archaeologists created a purpose-built data repository to catalogue the data and make them available, built on CKAN, the Open Knowledge Foundation's (OKF) open-source data catalogue and repository platform. In a recent post on the OKF blog, Anthony Beck and Dave Harrison discuss the project and its progress, and revisit what open data and open science mean for archaeologists.
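
As a hypothetical illustration (not drawn from the DART post itself), a CKAN-backed repository like this exposes its catalogue through CKAN's Action API, whose `package_search` endpoint returns matching datasets in a JSON envelope. A minimal sketch, assuming a reachable CKAN instance; the base URL and dataset name below are placeholders:

```python
import json
from urllib.parse import urlencode

CKAN_ACTION = "/api/3/action/package_search"

def build_search_url(base_url, query, rows=5):
    """Build a CKAN Action API URL searching for datasets matching `query`."""
    return base_url.rstrip("/") + CKAN_ACTION + "?" + urlencode(
        {"q": query, "rows": rows}
    )

def dataset_names(envelope):
    """Extract dataset names from a CKAN package_search JSON envelope.

    CKAN responses carry a `success` flag and a `result` payload
    containing `count` and a `results` list of dataset records.
    """
    if not envelope.get("success"):
        raise RuntimeError("CKAN query failed")
    return [pkg["name"] for pkg in envelope["result"]["results"]]

# A sample envelope shaped like a CKAN package_search response
# (the dataset name is invented for illustration):
sample = json.loads(
    '{"success": true, "result": {"count": 1,'
    ' "results": [{"name": "dart-aerial-survey"}]}}'
)
```

Fetching `build_search_url(...)` with any HTTP client and passing the parsed JSON to `dataset_names` would list the catalogued datasets.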

figshare Now Takes Code, Software and Scripts

Last week, figshare announced forthcoming functionality to sync GitHub repos through the figshare upload page. Today, figshare released this feature: you can now upload code, software, and scripts, and receive academic credit for them.

Related post by Data Forwards (March 12, 2014): For Improved Access to Software and Code

Research Data Needs Assessment Study by the University of Iowa

The report, authored by Shawn Averkamp, Xiaomei Gu, and Ben Rogers, presents survey and interview findings from a recent data needs assessment study commissioned by the University of Iowa Libraries to survey the campus landscape and identify gaps in data management services. The first stage of data collection was a survey conducted during summer 2012, which received 784 responses. The second phase consisted of approximately 40 in-depth interviews with individuals across campus, completed during summer 2013. Findings are presented in five broad areas of data management: 1) data management planning, 2) data storage, 3) data organization and analysis, 4) data publishing and dissemination, and 5) sensitive data and compliance, with additional findings reported on research culture and funding models.

Digital Curation Centre Publishes a New How-to Guide

The Digital Curation Centre (DCC) has released the latest in its series of How-to Guides: How to Discover Requirements for Research Data Management Services.

  • What is this guide about? It provides an overview of the research data management (RDM) service development context and details methods used to plan, elicit, analyse, document, and prioritize service users’ requirements.
  • Who should read it? The guide is aimed at people whose role involves developing services or tools to support RDM and digital curation, whether in a Higher Education Institution or a project working across institutions.

Sharing Qualitative Data: The Launch of the Qualitative Data Repository

While data sharing, research transparency, and replication have customarily been prominent concerns for quantitative researchers, they are increasingly seen as relevant to the qualitative tradition as well. The Qualitative Data Repository (QDR), which recently came online at Syracuse University, aims to select, ingest, curate, archive, manage, durably preserve, and provide access to digital data used in qualitative and multi-method research in the social sciences. The repository also develops and publicizes common standards and methodologically informed practices for these activities, as well as for reusing and citing qualitative data. It is hosted by the Center for Qualitative and Multi-Method Inquiry (a unit of Syracuse University’s Maxwell School of Citizenship and Public Affairs) and funded by the National Science Foundation.

‘Data in Brief’ Articles Make Reproducibility a Reality

The first volume of Genomics Data, an open access data journal from Elsevier, was published in December 2013. Paige Shaklee recently wrote an ElsevierConnect article summarizing the context and goals of the new journal, which helps researchers make the most of their data. Dr. Shaklee says in her article, “(a)lthough this precious genomic data is uploaded into public repositories, sadly, few people dare to touch the data because it is too complicated to understand. Data files are often mislabeled, data may be raw or analyzed and analysis from dataset to dataset is highly variable, experimental subtleties are not mentioned, and software code used to filter through data is not available. The lack of reproducibility has far-reaching consequences.” She continues, “(n)ow, these kinds of accompanying details must be documented in a Specifications table at the top of each Data in Brief. The journal’s Editorial Board also checks that any related software or programming code is submitted alongside the Data in Brief.”

Call for Proposals for Digital Preservation 2014

The deadline for Digital Preservation 2014 proposal submissions is quickly approaching. Digital Preservation 2014 will be held July 22-24 in the Washington, DC area. The annual summer meeting brings together the broad, diverse digital preservation and stewardship community to share achievements in the areas of technical infrastructure, innovation, content collection, standardization, and outreach and education efforts.

They are looking for your ideas, accomplishments, and project updates that highlight, contribute to, and advance the community dialog. Areas of interest include, but are not limited to:

  • Scientific data and other content at risk of obsolescence, and what methods, techniques, and tools are being deployed to mitigate risk
  • Innovative methods of digital preservation, especially regarding sustainable practices, community approaches, and software solutions
  • Collaboration successes and lessons learned highlighting a wide range of digital preservation activities, such as best practices, open source solutions, project management techniques, and emerging tools
  • Practical examples of research and scholarly use of stewarded data or content
  • Educational trends for emerging and practicing professionals