First time users

Tips for first-time users!

OpenDataBio is software designed to be used online. Local installations are intended for testing or development, although a localhost install could serve a single-user production environment.

User roles

  • If you are installing, the first login to an OpenDataBio installation must be done with the default super-admin user and its default password (password1). Change these settings immediately, or the installation will be open to anyone who reads the docs;
  • Self-registration only grants access to datasets whose privacy is set to registered users and allows the user to download open-access data, but it does not allow the user to edit or add data;
  • Only full users can contribute data.
  • Only the super-admin can grant the full-user role to registered users. Different OpenDataBio installations may have different policies on how to obtain full-user access; this documentation is not the place to find that information.

See also User Model.

Prep your full-user account

  1. Register yourself as a Person and assign it as your user's default person, creating a link between your user account and yourself as a collector.
  2. You need at least one dataset to enter your own data.
  3. When you become a full user, a restricted-access Dataset and a Project will be created for you automatically (your Workspaces). You may modify these entities to fit your personal needs.
  4. You may create as many Projects and Datasets as needed, so understand how they work and which data they control access to.

Entering data

There are three main ways to import data into OpenDataBio:

  1. One by one through the web interface
  2. Using the OpenDataBio POST API services:
    1. importing from a spreadsheet file (CSV, XLSX or ODS) using the web interface
    2. using the OpenDataBio R package client

When using the OpenDataBio API services, you must prepare your data or file for import according to the field options of the POST verb for the specific endpoint you are importing into.
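As a rough illustration of what "prepping your data according to the POST field options" can look like, the sketch below assembles a request for a hypothetical individuals import. The endpoint path, the token header format, and the field names in the record are assumptions for illustration only; check the POST field options of the endpoint you are actually importing into.

```python
import json

def build_individual_request(base_url, token, record):
    """Assemble URL, headers, and JSON body for a hypothetical
    POST import call (endpoint and header format are assumptions)."""
    url = f"{base_url}/api/v0/individuals"
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
        # The exact authorization header your installation expects
        # is an assumption here -- consult your installation's API docs.
        "Authorization": f"Token {token}",
    }
    body = json.dumps(record)  # serialize the prepared fields
    return url, headers, body

# Illustrative record; field names are hypothetical, not the official schema.
url, headers, body = build_individual_request(
    "https://example.org/opendatabio",
    "YOUR-API-TOKEN",
    {"collector": "1", "tag": "T-001", "latitude": -2.95, "longitude": -59.93},
)
# To actually send the request you could use, for example:
# requests.post(url, headers=headers, data=body)
```

The same preparation applies whether you call the API directly or through the OpenDataBio R package client, which wraps these calls for you.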

Tips for entering data

  1. If this is your first time entering data, use the web interface and create at least one record for each model needed to fit your needs. Then play with the privacy settings of your Workspace Dataset, and check whether you can access the data when logged in and when not logged in.
  2. Use a Dataset for a self-contained set of data that should be distributed as a group. Datasets are dynamic publications: they have an author, a date, and a title.
  3. Although ODB attempts to minimize redundancy, giving users flexibility comes at a cost, and some definitions, such as those of Traits or Persons, may receive duplicate entries. So, take care when creating such records. Administrators may create a 'code of conduct' for the users of an ODB installation to minimize such redundancy.
  4. Follow an order when importing new data, starting with the libraries of common use. For example, register Locations, Taxons, Persons, Traits, and any other common library before importing Individuals or Measurements.
  5. There is no need to import POINT locations before importing Individuals: ODB creates the location for you when you inform latitude and longitude, and detects which parent location your individual belongs to. However, if you want to validate your points (i.e. understand where a point location will be placed), you may use the Location API with the querytype parameter specified for this.
  6. There are different ways to create PLOT and TRANSECT locations - see Locations if that is your case.
  7. Creating Taxons requires only the specification of a name - ODB will search nomenclature services for you, find the name, its metadata and parents, and import all of them if needed. If you are importing published names, just inform this single attribute. If a name is unpublished, you need to inform additional fields. So, separate the batch importation of published and unpublished names into two sets.
  8. The notes field of any model accepts either plain text or a JSON-formatted string. The JSON option allows you to store custom structured data in any model that has a notes field. You may, for example, store as notes some secondary fields from original sources when importing data, or any additional data not provided by the ODB database structure. Such data will not be validated by ODB, and the standardization of tags and values is up to you. JSON notes are imported and exported as a JSON string and presented in the interface as a formatted table; URLs in your JSON will be presented as links.
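A minimal sketch of preparing a JSON notes string for import is shown below. The tag names ("original_source", "voucher_url") are hypothetical examples; as noted above, ODB does not validate this content, so standardizing tags and values is up to you.

```python
import json

# Build a JSON string suitable for a notes field.
# Tag names here are illustrative, not an ODB standard.
notes = json.dumps({
    "original_source": "Field spreadsheet 2019, row 42",
    "voucher_url": "https://example.org/voucher/123",
})

# The string round-trips: ODB stores and exports it as-is, and the
# interface renders it as a table, turning URL values into links.
parsed = json.loads(notes)
```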