Tutorial: generating Linked Data with YARRRML

YARRRML is a human-readable text-based representation for declarative Linked Data generation rules.
It is a subset of YAML, a widely used data serialization language designed to be human-friendly.
It can be used to represent R2RML, RML, and
SPARQL-Generate rules.
This tutorial introduces you to YARRRML
by explaining how to generate Linked Data from existing data sources with YARRRML rules.
You can use Matey to run this tutorial's examples yourself in the browser.

Before we start the tutorial

Learning objective

At the end of the tutorial
you will be able to generate RDF
from multiple, existing data sources in different formats
by manually writing YARRRML rules.

Prerequisites

We assume that you understand Linked Data and, more specifically, the
Resource Description Framework (RDF).
However, the basic concepts of RDF are still explained.
We also assume that you are familiar with the concepts of vocabularies and ontologies, such as classes, properties, and datatypes.

How to use the tutorial

There are three ways to complete this tutorial:
you read the explanations and additionally do one of the following:

  • read the examples
  • try out the examples yourself by writing rules in your browser
  • try out the examples yourself by writing rules on your computer

Writing rules in your browser

This is the quickest way to get started:
open Matey in a new tab.
Matey is a browser-based IDE that allows you to load existing data sources, write YARRRML rules, and
see the corresponding RDF.

Writing rules on your computer

This is completely optional and not required for this tutorial!
It requires more work but allows you to complete the tutorial using an editor of your choice.
Here are the steps to follow:

  1. Make sure you have a recent version of Node.js installed.

  2. Install the yarrrml-parser, which translates YARRRML rules to RML rules.

    npm i @rmlio/yarrrml-parser -g
    
  3. Make sure you have a recent version of Java installed.

  4. Make sure you have a recent version of the RMLMapper.
    You can either download a release or build it from source.

  5. Write your YARRRML rules, for example, in a document called rules.yml.

  6. Translate the YARRRML rules and execute the RMLMapper with the corresponding RML rules.

    yarrrml-parser -i rules.yml -o rules.rml.ttl
    java -jar /path/to/rmlmapper.jar -m rules.rml.ttl
    

The generated RDF triples and quads are directed to standard output,
but you can write them to a file by using the RMLMapper's option -o /path/to/outputfile.
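
For example, combining the two commands above and writing the output to a file could look like this (all file names and paths are just placeholders):

yarrrml-parser -i rules.yml -o rules.rml.ttl
java -jar /path/to/rmlmapper.jar -m rules.rml.ttl -o /path/to/outputfile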

Concepts

Existing data sources

The existing data sources contain the data that you want to annotate.
These data can be in different formats, such as CSV, XML, and JSON, and
they can come from different origins, such as files, databases, and Web APIs.

RDF

In this section, we give a short recap of RDF terms, triples, and quads.

Terms

Data in RDF is represented using either an
Internationalized Resource Identifier (IRI),
a literal, or
a blank node.
They are known as RDF terms.
IRIs are a generalization of URIs that
permits a wider range of Unicode characters.
For example, http://example.com/john and http://example.com/country/belgium.
Literals are used for values such as strings, numbers, and dates.
A literal consists of two or three elements:

  • a lexical form, for example, "John"
  • a datatype IRI, for example, http://www.w3.org/2001/XMLSchema#string
  • if and only if the datatype IRI is http://www.w3.org/1999/02/22-rdf-syntax-ns#langString,
    a non-empty language tag, for example, "en"

Blank nodes are disjoint from IRIs and literals.
Unlike IRIs and literals, blank nodes do not identify specific resources.
Statements involving blank nodes say that something with the given relationships exists, without explicitly naming it.

Triples

An RDF triple consists of three components:

  • the subject, which is an IRI or a blank node
  • the predicate, which is an IRI
  • the object, which is an IRI, a literal or a blank node

A triple is conventionally written in the order subject, predicate, object.

Quads

An RDF quad consists of four components:

  • the subject, which is an IRI or a blank node
  • the predicate, which is an IRI
  • the object, which is an IRI, a literal or a blank node
  • the graph, which is an IRI or a blank node

A quad is conventionally written in the order subject, predicate, object, graph.

Prefixed names

IRIs may be written as prefixed names.
To do so, a prefix and the corresponding namespace need to be defined.
For example, if we define "ex" as the prefix for the namespace http://example.com/,
then we can write http://example.com/john as ex:john.
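
Putting these concepts together, a triple and a quad could, for example, look as follows (ex:john, ex:givenName, and ex:People are made up solely for illustration):

ex:john ex:givenName "John" .
ex:john ex:givenName "John" ex:People .

The first line is a triple; the second line is a quad that places the same statement in the graph ex:People, written in the informal "subject predicate object graph" notation also used later in this tutorial.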

Rules

YARRRML rules are declarative rules that define how RDF is generated by annotating multiple, existing data sources.

Document

YARRRML rules are contained in a document.

Example

Consider the following CSV file called "people.csv":

|id|firstname|lastname  |debut episode|hair color|
|0 |Natsu    |Dragneel  |1            |pink      |
|1 |Gray     |Fullbuster|2            |dark blue |
|2 |Gajeel   |Redfox    |21           |black     |
|3 |Lucy     |Heartfilia|1            |blonde    |
|4 |Erza     |Scarlet   |4            |scarlet   |

It contains information about five different characters, corresponding to the five rows, that appear in the same TV show.
The information includes their id, first name, last name, the number of their debut episode, and hair color.
We would like to annotate every character and generate the corresponding RDF triples and quads.

For example, consider the character described in the first row:

|0|Natsu|Dragneel|1|pink|

We need to define the IRI (or blank node) that represents this character,
which will be used in the triples and quads that provide information about this character.
We will use the concatenation of "http://example.com/" and the id as IRI.
This results in ex:0 for this character, when using the prefix ex for http://example.com.

We annotate every character with the class schema:Person.
This results in the predicate and object: a schema:Person.
Furthermore, this results in the triple ex:0 a schema:Person,
by combining this predicate and object with the aforementioned subject.
We annotate the first name of the character with the property schema:givenName and
the last name with the property schema:familyName.
This results in the predicates and objects: schema:givenName "Natsu" and schema:familyName "Dragneel".
We annotate the number of the debut episode with the property e:debutEpisode and
additionally say that the number is an integer via the datatype xsd:integer.
This results in the predicate and object: e:debutEpisode "1"^^xsd:integer.
We annotate the upper case version of the hair color with
the property dbo:hairColor and
additionally say that the color is written in the English language.
This results in the predicate and object: dbo:hairColor "PINK"@en.

The resulting RDF triples are

ex:0 a schema:Person .
ex:0 schema:givenName "Natsu" .
ex:0 schema:familyName "Dragneel" .
ex:0 e:debutEpisode "1"^^xsd:integer .
ex:0 dbo:hairColor "PINK"@en .

In the following sections,
we explain what rules you need to generate such triples, and
how you write them using YARRRML.

What rules are needed

Two sets of rules are needed:

  • rules that describe the existing data sources
  • rules that define how the RDF terms are generated from these data sources,
    and how these terms are used to generate triples and quads.

In our example, we need rules that define:

  • how the IRI representing a character is generated
  • that this IRI is used as subject of the triples and quads
  • that the class of a character is schema:Person,
  • that the first name is annotated with the property schema:givenName,
  • that the last name is annotated with the property schema:familyName,
  • that the number of the episode they debuted in is annotated with the property e:debutEpisode,
    and is of the datatype xsd:integer
  • that the hair color is annotated with the property dbo:hairColor,
  • that the uppercase version of the hair color is used,
  • that the hair color is provided in the English language, and
  • their link with the episodes.

Furthermore, we want to add certain triples to specific graphs:

  • all triples of characters to the graph ex:Characters, and
  • all episode-related triples of characters to the graph ex:Episodes.

How to start a YARRRML document

A minimal YARRRML document looks as follows:

mappings:

Rules that describe entities are found under mappings,
which is at the root of the document.
It is possible that more than one entity needs to be described.
Therefore, rules are grouped per entity and are given a unique key.
In this example, we use people as the key for the characters.

Note that people has to be indented with at least one space to make it part of mappings.
Indentation is significant in YARRRML, as in YAML: child elements are indented more than the parent element.

mappings:
  people:

A set of prefixes and namespaces is predefined by default.
Custom prefixes can be added under the key prefixes,
which is at the root of the document.
Each combination of a prefix and namespace is added to this collection as a key-value pair.
In the following example, four prefixes are defined: "ex", "e", "dbo", and "grel".

prefixes:
  ex: http://www.example.com/
  e: http://myontology.com/
  dbo: http://dbpedia.org/ontology/
  grel: http://users.ugent.be/~bjdmeest/function/grel.ttl#

Note that, similar to people, each prefix (ex, e, dbo, and grel) has to be indented with at least one space.

When you define a prefix that is also a default prefix,
then it is overwritten by your namespace.
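
For example, schema is one of the predefined prefixes; a purely hypothetical redefinition such as the following would make every term written with schema: expand to the new namespace instead of the default one.

prefixes:
  schema: http://example.org/my-schema/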

What data to use

We need two or three elements to describe which data is used:

  • location of the data source (via the key access)
  • how we refer to the data within the data source (via the key referenceFormulation)
  • how we iterate over the data (via the key iterator; optional)

The data in our example is in a CSV file called "people.csv".
We describe that in YARRRML via

access: people.csv
referenceFormulation: csv

Note that in the case of CSV we iterate over all rows.
Thus, you do not need to specify how to iterate over the data.
For other formats, such as JSON and XML, you need the iterator (discussed later on).

We add this value to sources,
which is part of people:

mappings:
  people:
    sources:
      - access: people.csv
        referenceFormulation: csv

There is also a shorter way to write this: [people.csv~csv].
The value of access is written before ~ and the value of referenceFormulation after.
The result is

mappings:
  people:
    sources:
      - [people.csv~csv]

How to generate subjects

We define how the IRI of a character is generated.
This IRI is used as the subject of the RDF triples for the entity.
We add this definition by adding a new value to s (short for subjects).

mappings:
  people:
    sources:
      - ['people.csv~csv']
    s: ex:$(id)

The value ex:$(id) states that the prefix ex is concatenated with the value of the column "id".
The use of $(...) allows you to use values from the data sources.
In this case, we can refer to values in the different columns.
This specific rule results in the following IRIs as subjects for the characters: ex:0, ex:1, ex:2, ex:3, and ex:4.
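
Since $(...) can refer to any column, other subject templates are possible as well; a hypothetical alternative that combines two columns would be:

s: ex:$(firstname)_$(lastname)

This would produce subjects such as ex:Natsu_Dragneel instead of ex:0.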

How to generate predicates and objects

How to annotate an entity with a class

In our example, we need to annotate every character with the class schema:Person.
This is done by adding a value with the keys p and o to po:
p holds the value for the predicate, and
o holds the value for the object.

mappings:
  people:
    sources:
      - ['people.csv~csv']
    s: ex:$(id)
    po:
      - p: a
        o: schema:Person

The following triples are generated using these rules.

ex:0 a schema:Person .
ex:1 a schema:Person .
ex:2 a schema:Person .
ex:3 a schema:Person .
ex:4 a schema:Person .

Note that a is a shortcut for rdf:type.
Thus,

p: a
o: schema:Person

is the same as

p: rdf:type
o: schema:Person

There is a shortcut version available for this via the array notation: [a, schema:Person].
The value is an array where the first element is the predicate (a, the value of the key p) and
the second element the object (schema:Person, the value of the key o).

mappings:
  people:
    sources:
      - ['people.csv~csv']
    s: ex:$(id)
    po:
      - [a, schema:Person]

How to annotate an entity with a property

We define that every character is annotated with its first name,
which can be found in the column "firstname",
via schema:givenName.
This is done by adding another array value to po.

mappings:
  people:
    sources:
      - ['people.csv~csv']
    s: ex:$(id)
    po:
      - [a, schema:Person]
      - [schema:givenName, $(firstname)]

The array is [schema:givenName, $(firstname)], where schema:givenName is the first element (predicate)
and $(firstname) the second element (object).
Note that the latter will take the value in the column "firstname" and use that as object,
because "firstname" is enclosed with $(...).
The following triples are generated using these rules.

ex:0 a schema:Person .
ex:0 schema:givenName "Natsu" .
ex:1 a schema:Person .
ex:1 schema:givenName "Gray" .
ex:2 a schema:Person .
ex:2 schema:givenName "Gajeel" .
ex:3 a schema:Person .
ex:3 schema:givenName "Lucy" .
ex:4 a schema:Person .
ex:4 schema:givenName "Erza" .

Next, we define that every character is annotated with its last name,
which can be found in the column "lastname",
via schema:familyName.
This is done by adding [schema:familyName, $(lastname)] to po.
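After this addition, the po block of people contains three entries:

po:
  - [a, schema:Person]
  - [schema:givenName, $(firstname)]
  - [schema:familyName, $(lastname)]
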
The following triples are generated using these rules.

ex:0 a schema:Person .
ex:0 schema:givenName "Natsu" .
ex:0 schema:familyName "Dragneel" .
ex:1 a schema:Person .
ex:1 schema:givenName "Gray" .
ex:1 schema:familyName "Fullbuster" .
ex:2 a schema:Person .
ex:2 schema:givenName "Gajeel" .
ex:2 schema:familyName "Redfox" .
ex:3 a schema:Person .
ex:3 schema:givenName "Lucy" .
ex:3 schema:familyName "Heartfilia" .
ex:4 a schema:Person .
ex:4 schema:givenName "Erza" .
ex:4 schema:familyName "Scarlet" .

How to define the datatype of a literal value

We define that every character is annotated with its debut episode's number,
which can be found in the column "debut episode",
via e:debutEpisode.
This is done by adding the following to po:

p: e:debutEpisode
o:
  value: $(debut episode)

The following extra triples are generated using this specific rule.

ex:0 e:debutEpisode "1" .
ex:1 e:debutEpisode "2" .
ex:2 e:debutEpisode "21" .
ex:3 e:debutEpisode "1" .
ex:4 e:debutEpisode "4" .

However, we want to say that the literal value is of the datatype xsd:integer.
For example, for the first triple we want the object to be "1"^^xsd:integer.
This can be achieved by updating the rule to:

p: e:debutEpisode
o:
  value: $(debut episode)
  datatype: xsd:integer

The key datatype is added with the desired datatype xsd:integer as value.
The resulting triples are

ex:0 e:debutEpisode "1"^^xsd:integer .
ex:1 e:debutEpisode "2"^^xsd:integer .
ex:2 e:debutEpisode "21"^^xsd:integer .
ex:3 e:debutEpisode "1"^^xsd:integer .
ex:4 e:debutEpisode "4"^^xsd:integer .

The shortcut version for values of po can also be used in this case:
[e:debutEpisode, $(debut episode), xsd:integer].
Here a third element is added to the array: xsd:integer, which is the desired datatype.
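
With this shortcut, the relevant entries of po could be written as:

po:
  - [a, schema:Person]
  - [schema:givenName, $(firstname)]
  - [schema:familyName, $(lastname)]
  - [e:debutEpisode, $(debut episode), xsd:integer]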

How to define the language of a literal value

We define that every character is annotated with its hair color,
which can be found in the column "hair color",
via dbo:hairColor.
This is done by adding the following to po:

p: dbo:hairColor
o:
  value: $(hair color)

The following extra triples are generated using this specific rule.

ex:0 dbo:hairColor "pink" .
ex:1 dbo:hairColor "dark blue" .
ex:2 dbo:hairColor "black" .
ex:3 dbo:hairColor "blonde" .
ex:4 dbo:hairColor "scarlet" .

However, we want to say that the literal value is in English.
For example, for the first triple we want the object to be "pink"@en.
This can be achieved by updating the rule to:

p: dbo:hairColor
o:
  value: $(hair color)
  language: en

The key language is added with
the desired language en (i.e., English) as value.
The resulting triples are

ex:0 dbo:hairColor "pink"@en .
ex:1 dbo:hairColor "dark blue"@en .
ex:2 dbo:hairColor "black"@en .
ex:3 dbo:hairColor "blonde"@en .
ex:4 dbo:hairColor "scarlet"@en .

The shortcut version for values of po can also be used in this case:
[dbo:hairColor, $(hair color), en~lang].
Here a third element is added to the array: en~lang, which is the desired language en together with ~lang.
Without ~lang the value will be considered a datatype.

Note that you cannot define a datatype and language at the same time,
as the datatype of a literal with a language-tag is predefined.

How to link entities

We need to link the episodes with the characters.
To do so, we first describe the episodes themselves.
Details of the four episodes can be found in the file "episodes.csv":

|number|title                                  |airdate   |
|1     |Fairy Tail                             |12/10/2009|
|2     |The Fire Dragon, the Monkey, and the Ox|19/10/2009|
|3     |Infiltrate! The Everlue Mansion!       |26/10/2009|
|4     |DEAR KABY                              |02/11/2009|

We define the following rules:

mappings:
  episode:
    sources:
      - ['episodes.csv~csv']
    s: ex:episode_$(number)
    po:
      - [a, schema:Episode]
      - [schema:title, $(title)]

Every episode is a schema:Episode and is annotated with its title via schema:title.

The following triples are generated.

ex:episode_1 a schema:Episode;
  schema:title "Fairy Tail".
ex:episode_2 a schema:Episode;
  schema:title "The Fire Dragon, the Monkey, and the Ox".
ex:episode_3 a schema:Episode;
  schema:title "Infiltrate! The Everlue Mansion!".
ex:episode_4 a schema:Episode;
  schema:title "DEAR KABY".

Now we can link the episodes with the characters.
The relationship between the two is established via
the debut episode's number of a character and the number of an episode.
Thus, we add the following rules for the characters:

po:
  - p: e:appearsIn
    o:
      mapping: episode
      condition:
        function: equal
        parameters:
          - [str1, $(debut episode), s]
          - [str2, $(number), o]

We are not able to use the shortcut/array notation to define the predicate and object.
p: e:appearsIn states that we want to use the predicate e:appearsIn.
mapping: episode states that we want to create a link between the characters and the episodes,
which are identified by the key episode.
condition defines when characters and episodes are linked.
More specifically, links are only made when the "debut episode" of a character equals the "number" of an episode.
function: equal defines that the equal function is used.
This function has two parameters: str1 and str2.
Therefore, parameters has two elements.
- [str1, $(debut episode), s] states that the value of "debut episode" is coming from
the subject of the triples (via s at the end),
which is the character, and is used as value for str1.
- [str2, $(number), o] states that the value of "number" is coming from the object of the triples (via o at the end),
which is the episode, and is used as value for str2.

Note that str1 and str2 can be switched as this does not influence the result of the equal function.

This results in the following extra triples:

ex:0 e:appearsIn ex:episode_1 .
ex:1 e:appearsIn ex:episode_2 .
ex:3 e:appearsIn ex:episode_1 .
ex:4 e:appearsIn ex:episode_4 .

Note that for ex:2 no triple is generated, as information about the 21st episode is not provided.

How to add triples to a graph

It is possible to add all triples to a graph, or only po-specific triples to a graph, and by doing so generate quads.

How to add all triples to a graph

We need to add all triples about characters to the graph ex:Characters.
This is done by adding the key graphs to people,
together with the corresponding value that defines the graph.

mappings:
  people:
    graphs: ex:Characters

Note that it is possible to use $(...) in the same way as for the subject, predicate, and object.
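
For instance, a hypothetical variant would put the triples of each character in its own graph (ex:graph_0, ex:graph_1, and so on); the rest of this tutorial keeps the fixed graph ex:Characters.

mappings:
  people:
    graphs: ex:graph_$(id)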

The following quads are generated for the first character.

ex:0 a schema:Person ex:Characters .
ex:0 schema:givenName "Natsu" ex:Characters .
ex:0 schema:familyName "Dragneel" ex:Characters .
ex:0 e:debutEpisode "1"^^xsd:integer ex:Characters .
ex:0 dbo:hairColor "pink"@en ex:Characters .
ex:0 e:appearsIn ex:episode_1 ex:Characters .

How to add po-specific triples to a graph

We need to add po-specific triples about characters to the graph ex:Episodes.
More specifically, all triples that are related to episodes should be in a separate graph.
This is done by adding the key graphs to the fourth and sixth elements of po,
together with the corresponding value that defines the graph.

mappings:
  people:
    graphs: ex:Characters
    po:
      - [a, schema:Person]
      - [schema:givenName, $(firstname)]
      - [schema:familyName, $(lastname)]
      - p: e:debutEpisode
        o:
         value: $(debut episode)
         datatype: xsd:integer
        graphs: ex:Episodes
      - [dbo:hairColor, $(hair color), en~lang]
      - p: e:appearsIn
        o:
          mapping: episode
          condition:
            function: equal
            parameters:
              - [str1, $(debut episode), s]
              - [str2, $(number), o]
        g: ex:Episodes

Note that we had to expand the fourth element of po as graphs cannot be defined when using the array-based notation.

Note that g is a shortcut for graphs.

The following quads are generated for the first character.

ex:0 a schema:Person ex:Characters .
ex:0 schema:givenName "Natsu" ex:Characters .
ex:0 schema:familyName "Dragneel" ex:Characters .
ex:0 e:debutEpisode "1"^^xsd:integer ex:Characters .
ex:0 dbo:hairColor "pink"@en ex:Characters .
ex:0 e:appearsIn ex:episode_1 ex:Characters .
ex:0 e:debutEpisode "1"^^xsd:integer ex:Episodes .
ex:0 e:appearsIn ex:episode_1 ex:Episodes .

How to transform the data

It is possible to transform the existing data before using it in the triples and quads, by applying functions.
In our example, we need to use the uppercase version of the hair colors.
This is done by replacing [dbo:hairColor, $(hair color), en~lang] with

p: dbo:hairColor
o:
  function: grel:toUpperCase
  parameters:
    - [grel:valueParameter, $(hair color)]
  language: en

We call the function grel:toUpperCase (defined via the key function)
with the value of "hair color" as the value for the parameter grel:valueParameter (via the key parameters),
which is required for this function.
The full description of the function and its parameters is:

grel:toUpperCase a fno:Function ;
  fno:name "to Uppercase" ;
  rdfs:label "to Uppercase" ;
  dcterms:description "Returns the input with all letters in upper case." ;
  fno:expects ( grel:valueParam ) ;
  fno:returns ( grel:stringOut ) ;
  lib:providedBy [
    lib:localLibrary "GrelFunctions.jar";
    lib:class "GrelFunctions";
    lib:method "toUppercase"
  ].

grel:valueParam a fno:Parameter ;
  fno:name "input value" ;
  rdfs:label "input value" ;
  fno:type xsd:string ;
  fno:predicate grel:valueParameter .

The value that is used for function can be found on the first line of the description (grel:toUpperCase).
The parameter is described in the last five lines.
The triple with the predicate fno:predicate defines the name of the parameter,
which is used in the rules (grel:valueParameter).
The function is linked to its parameters via the triple with the predicate fno:expects.

Note that functions, including their parameters and implementations, are defined outside YARRRML.
Thus, custom functions can be added at any time by anyone.

The following triples are generated.

ex:0 dbo:hairColor "PINK"@en .
ex:1 dbo:hairColor "DARK BLUE"@en .
ex:2 dbo:hairColor "BLACK"@en .
ex:3 dbo:hairColor "BLONDE"@en .
ex:4 dbo:hairColor "SCARLET"@en .

Complete YARRRML document

The complete YARRRML document is

prefixes:
  ex: http://www.example.com/
  e: http://myontology.com/
  dbo: http://dbpedia.org/ontology/
  grel: http://users.ugent.be/~bjdmeest/function/grel.ttl#

mappings:
  people:
    sources:
      - ['people.csv~csv']
    s: ex:$(id)
    graphs: ex:Characters
    po:
      - [a, schema:Person]
      - [schema:givenName, $(firstname)]
      - [schema:familyName, $(lastname)]
      - p: e:debutEpisode
        o:
         value: $(debut episode)
         datatype: xsd:integer
        graphs: ex:Episodes
      - p: dbo:hairColor
        o:
          function: grel:toUpperCase
          parameters:
            - [grel:valueParameter, $(hair color)]
          language: en
      - p: e:appearsIn
        o:
          mapping: episode
          condition:
            function: equal
            parameters:
              - [str1, $(debut episode), s]
              - [str2, $(number), o]
        g: ex:Episodes
  episode:
    sources:
      - ['episodes.csv~csv']
    s: ex:episode_$(number)
    po:
      - [a, schema:Episode]
      - [schema:title, $(title)]

Other data formats

Besides CSV, it is also possible to generate Linked Data from existing data sources in the JSON and XML format.

JSON

Consider the following JSON file called "episodes.json" that
represents the episode data instead of a CSV file:

{
  "episodes": [
    {"number": 1, "title": "Fairy Tail", "airdate":"12/10/2009"},
    {"number": 2, "title": "The Fire Dragon, the Monkey, and the Ox", "airdate":"19/10/2009"},
    {"number": 3, "title": "Infiltrate! The Everlue Mansion!", "airdate":"26/10/2009"},
    {"number": 4, "title": "DEAR KABY", "airdate":"02/11/2009"}
  ]
}

In the case of CSV, we considered every row as an entity.
However, in the case of JSON we need to specify what represents an entity.
This is done via the iterator,
which in this example is $.episodes[*], as we want to iterate over every object inside the array episodes.
The corresponding rules are:

mappings:
  episode:
    sources:
      - access: episodes.json
        referenceFormulation: jsonpath
        iterator: "$.episodes[*]"

Here, we can also use a shortcut version:

mappings:
  episode:
    sources:
      - [episodes.json~jsonpath, "$.episodes[*]"]

A second element is added to the array, which is the iterator.
Note that in this case we use the JSONPath syntax to
define the iterator (hence the ~jsonpath part).
The same syntax will be used to refer to the different values inside the objects.
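
For example, here is a sketch of the complete episode mapping for the JSON file, assuming the same subject and po rules as in the CSV-based example; $(number) and $(title) are now resolved with JSONPath relative to the iterator:

mappings:
  episode:
    sources:
      - [episodes.json~jsonpath, "$.episodes[*]"]
    s: ex:episode_$(number)
    po:
      - [a, schema:Episode]
      - [schema:title, $(title)]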

XML

Consider the following XML file called "episodes.xml" that
represents the episode data instead of a CSV file:

<episodes>
  <episode>
    <number>1</number>
    <title>Fairy Tail</title>
    <airdate>12/10/2009</airdate>
  </episode>
  <episode>
    <number>2</number>
    <title>The Fire Dragon, the Monkey, and the Ox</title>
    <airdate>19/10/2009</airdate>
  </episode>
  <episode>
    <number>3</number>
    <title>Infiltrate! The Everlue Mansion!</title>
    <airdate>26/10/2009</airdate>
  </episode>
  <episode>
    <number>4</number>
    <title>DEAR KABY</title>
    <airdate>02/11/2009</airdate>
  </episode>
</episodes>

Similar to JSON, we define the iterator for the XML file,
this time using the XPath syntax.
The corresponding rules are:

mappings:
  episode:
    sources:
      - [episodes.xml~xpath, /episodes/episode]

Wrapping up

Congratulations!
You have created your own YARRRML rules that:

  • generate RDF from data about characters and episodes,
  • use data in CSV, JSON, and XML files,
  • add triples to graphs,
  • link entities, and
  • transform data.

Nice work!
We hope you now feel like you have a decent grasp on how YARRRML works.

More information

This tutorial was created based on the research findings in the following publications:

  • Reece, Gwendolyn J.
    "Critical thinking and cognitive transfer: Implications for the development of online information literacy tutorials."
    Research Strategies 20.4 (2005): 482-493.
  • Mestre, Lori S.
    "Student preference for tutorial design: A usability study."
    Reference Services Review 40.2 (2012): 258-276.
  • Alred, Gerald J., Charles T. Brusaw, and Walter E. Oliu.
    Handbook of technical writing. Macmillan, 2009.
  • Newcastle University.
    Writing Effective Learning Outcomes.
