Using Datomic from Groovy, Part 1: Everything is Data

In this post, I will demonstrate transacting and querying against Datomic from Groovy.  The examples shown here are based on a simple schema for a social news application, with users, stories, and comments.


There are a number of more in-depth samples in the datomic-groovy-examples project on GitHub.

Why Groovy

Groovy offers four key advantages for a Java programmer using Datomic:
  • Groovy provides interactive development through groovysh, the Groovy shell.  When combined with Datomic's dynamic, data-driven style, this makes it easy to interactively develop code in real time.  The source code for this post has a number of other examples designed for interactive study within the Groovy shell.
  • Groovy's collection literals make it easy to see your data.  Lists and maps are as easy as:
aList = ['John', 'Doe'];
aMap = [firstName: 'John',
        lastName: 'Doe'];
  • Groovy's closures make it easy to write functions, without the noise of single-method interfaces and anonymous inner classes. For instance, you could grab all the lastNames from a collection of maps:
lastNames = people.collect { it['lastName'] }

  • Of the popular expressive languages that target the JVM, Groovy's syntax is most similar to Java's.

Transactions

A Datomic transaction takes a list of data to be added to the database, and returns a future whose value is a map describing the results. The simplest possible transaction is a list of one sublist that adds an atomic fact, or datom, to the database, using the following shape:

conn.transact([[op, entityId, attributeName, value]]);

The components above are:
  • op is a keyword naming the operation. :db/add adds a datom, and :db/retract retracts a datom.
  • entityId is the numeric id of an entity.  You can use tempid when creating a new entity, as shown in the sketch below.
  • attributeName is a keyword naming an attribute.
  • value is the value of an attribute.  The allowed types for an attribute include numerics, strings, dates, URIs, UUIDs, binaries, and references to other entities.
Keywords are names, prefixed with a colon, possibly with a leading namespace prefix separated by the slash char, e.g.

:hereIsAnUnqualifiedName
:the.namespace.comes.before/theName

Putting this all together, you might add a new user's first name with:

conn.transact([[':db/add', newUserId, ':user/firstName', 'John']]);
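
Where does newUserId come from?  Here is a minimal sketch using the peer library's tempid (:db.part/user is the standard partition for application data):

import datomic.Peer

// manufacture a temporary id in the :db.part/user partition; the
// transactor resolves it to a real entity id when the transaction commits
newUserId = Peer.tempid(':db.part/user');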

If you are adding multiple datoms about the same entity, you can use a map instead of a list, with the special keyword :db/id identifying the entity.  For example, the following two transactions are equivalent:

// create an entity with two attributes (map form)
conn.transact([[':db/id': newUserId,
                ':user/firstName': 'John',
                ':user/lastName': 'Doe']]);

// create an entity with two attributes (list form)
conn.transact([[':db/add', newUserId, ':user/firstName', 'John'],
               [':db/add', newUserId, ':user/lastName', 'Doe']]);

Let's look next at composing larger transactions out of smaller building blocks. You have already seen creating a user:

newUser = [[':db/id': newUserId,
            ':user/email': 'john@example.com',
            ':user/firstName': 'John',
            ':user/lastName': 'Doe']];

Notice that this time we did not call transact; instead, we stored the data describing the user in newUser.

Now imagine that you have a collection of story ids in hand, and you want to create a new user who upvotes those stories.   Groovy's collect method iterates over a collection, transforming values using a closure with a default single parameter named it. We can use collect to build new assertions that refer to each story in a collection of storyIds:

upvoteStories = storyIds.collect {
    [':db/add', newUserId, ':user/upVotes', it]
}

Now we are ready to build a bigger transaction out of the pieces.  Because transactions are made of data, we don't need a special API for this.  Groovy already has an API for concatenating lists, called +:

conn.transact(upvoteStories + newUser);

Building Datomic transactions from data has many advantages over imperative or object-oriented approaches:
  • Composition is automatic, and requires no special API.
  • ACID transactionality is scoped to transaction calls, and does not require careful management across separate calls to the database.
  • Because they are data, Datomic transactions are flexible across system topology changes: they can be built offline for later use, serialized, and/or enqueued.

Query

The Datomic query API is named q, and it takes a query plus one or more inputs. The simple query below takes a query plus a single input, the database db, and returns the id of every entity in the system with an email address:

q('''[:find ?e
      :where [?e :user/email]]''', db);

Keyword constants in the :where clause constrain the results.  Here, :user/email constrains results to only those entities possessing an email.  

Symbols preceded by a question mark are variables, and will be populated by the query engine.  The variable ?e will match every entity id associated with an email.

A query always returns a set of lists, and the :find clause specifies the shape of lists to return.  In this example, the lists are of size one since a single variable ?e is specified by :find.

Note that the query argument to q is notionally a list.  As a convenience, you can pass the query argument as either a list or (as shown here) as an edn string literal.
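
For example, here is a hedged sketch of the list form, using datomic.Util from the peer library to read the edn into list data before passing it to q:

import datomic.Util

// read the edn text into list data, then pass the list as the query
query = Util.read('[:find ?e :where [?e :user/email]]');
q(query, db);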

The next query further constrains the result, to find a specific email address:

q('''[:find ?e
      :in $ ?email
      :where [?e :user/email ?email]]''',
  db, 'editor@example.com');

There are several things to see here.  There are now two inputs to the query: the database itself, and the specific email "editor@example.com" we are looking for.  Since there is more than one input, the inputs must be named by an :in clause.  The :in clause names inputs in the order they appear:
  1. $ is Datomic shorthand for a single database input. 
  2. ?email is bound to the scalar "editor@example.com".
Inputs need not be scalar. The shape [?varname ...] in an :in clause is called a collection binding form, and it binds a collection instead of a single value. The following query looks up two different users by email:

q('''[:find ?e
      :in $ [?email ...]
      :where [?e :user/email ?email]]''',
  db, ['editor@example.com', 'stuarthalloway@datomic.com']);

Another way to join is by having more than one constraint in a :where clause.  Whenever a variable appears more than once, it must match the same set of values in all the locations that it appears.  The following query joins through ?user to find all the comments for a user:

q('''[:find ?comment
      :in $ ?email
      :where [?user :user/email ?email]
             [?comment :comment/author ?user]]''',
  db, 'editor@example.com');

We have only scratched the surface here.  Datomic's query also supports rules, predicates, function calls, cross-database queries (with joins!), aggregates, and even queries against ordinary Java collections without a database.  In fact, the Datalog query language used in Datomic supports a superset of the capabilities of the relational algebra that underpins SQL.
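
As a small taste, here is a sketch of an aggregate that counts the users in the system, using the same db value as the queries above:

// aggregate: count the entities that have an email address
q('''[:find (count ?e)
      :where [?e :user/email]]''', db);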

Conclusion

In this installment, you have seen the powerful, compositional nature of programming with generic data.  In Part 2, we will look at the database as a value, and explore the implications of having a lazy, immutable database inside your application process.

..

Sync


Background

Datomic's approach to updating peers uses a push model. Rather than have every read request route to the same server in order to get consistent data, data is stored immutably, and as soon as there is new information, all peers are notified. This completely eliminates polling any server. Thus, contrary to common presumption, when you ask the connection for the db value, there is no network communication involved: you are immediately given the local value of the db about which the connection was most recently informed.

Everyone sees a valid, consistent view. You can never see partial transactions, corruption/regression of timelines, causal anomalies etc. Datomic is always 'business rules' valid, and causally consistent.

Motivation

That does not mean that every peer sees the same thing simultaneously. Just as in the real world, it is never the case that everyone sees the same thing "at the same time" in a live distributed system.  There is no inherent shared truth, as you might convey a message to me about X at the speed of light but I can only perceive X at the speed of sound. Thus, I know X is coming, but I might have to wait for it.

This means that some peer A might commit a transaction and tell B about it before B is informed via the normal channels. This is an interesting case, as it has to do with perception and propagation delays. It is not a question of consistency, it is a question of communication synchronization.

It comes up when you would like to read-your-own-writes via other peers (e.g. when a client hits different peer servers via a load balancer), and when there is out-of-band communication of writes (A tells B about its write before the transactor does).

Tools

We've added a new sync API to help you manage these situations.

The first form of sync takes a basis point (T). It returns a future that will be fulfilled with a version of the db that includes point T. This does not cause any additional interaction with the transactor - the future will be filled by the normal communication on the update channels. But it saves you from having to poll for arrival. Most often, you will already have the requested T, and the future will complete immediately. This is the preferred method to use if you have any ability to convey the basis T, either in the message from A to B, or e.g. in cookies as a client hits different peers using a load balancer. You can easily get the basis T for any db value you have in hand.
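
Here is a minimal Clojure sketch of that flow (conn-a, conn-b, tx-data, and the means of conveying t between peers are assumptions, not part of the API):

(require '[datomic.api :as d])

;; on peer A: transact, then capture the basis point of the resulting db
(def t (-> @(d/transact conn-a tx-data) :db-after d/basis-t))
;; ...convey t to peer B out of band (message payload, cookie, etc.)...

;; on peer B: sync returns a future; deref it to get a db value
;; guaranteed to include point t
(def db-including-t @(d/sync conn-b t))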

The second form of sync takes no arguments, and works via 'ping' of the transactor. It promises not to return until all transactions that have been acknowledged by the transactor at the time sync was called have arrived at this peer. Thus if A has successfully committed a transaction and told B about it, and B then calls sync(), the database returned by sync will include A's transaction.
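
A sketch of the no-argument form, reusing conn-b from the sketch above:

;; ping the transactor: the deref blocks until everything the transactor
;; had acknowledged at the time of the call has arrived at this peer
(def db-now @(d/sync conn-b))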

Conclusion

While these synchronization tools are powerful, make sure you use them only when necessary. The Datomic defaults were designed to leverage the inherent parallelism possible given immutable, accretion-only semantics and distributed storage. Notifications to peers are sent at the same time as the acknowledgement to the peer submitting the transaction, and thus are as 'simultaneous' as network communication can be. The sync tools need only be utilized to enforce cross-peer causal relationships.

..

A Whirlwind Tour of Datomic Query

Introduction

This tour helps those new to Datomic understand its built-in Datalog query language, by providing a simple domain and schema and walking through some use cases.  For a more complete treatment of Datomic's query capabilities, please take a look at the documentation.

Reading the Code

To follow along in code, you can download Datomic Free Edition at http://downloads.datomic.com/free.html, and the sample project at https://github.com/datomic/day-of-datomic. The code examples should be executed interactively, in the order presented in the article, from a Clojure REPL. The complete code is in the query_tour.clj file. The => prefix indicates a response printed by the REPL. The use of ellipsis (...) in output indicates that a larger result has been truncated for brevity.

A Simple Schema

The example queries that follow work against a simplified schema that you might use for a social news database, containing users, stories, and comments.

Get Connected

In order to get a database connection, you will need to require namespaces for the Datomic API and for the news application, then set up a sample database.

Listing 1: Getting Connected

(require
'[datomic.api :as d]
'[datomic.samples.news :as news])

(def uri "datomic:mem://news")
(def conn (news/setup-sample-db-1 uri))

Get a Database Value

The connection from the previous step is already populated with a schema and sample data, but before you query against it, you must get the current value of the database. Syntactically this is trivial:

Listing 2: Getting a Database Value

(def db (d/db conn))

While the syntax of this step is trivial, the semantic implications are deep. Datomic queries do not happen "over there" in some database process, they happen here, in your process's memory space.

A first query

Users in the system are identified by a :user/email attribute. Let's find all of them:

Listing 3: Finding All Users

(d/q '[:find ?e
       :where [?e :user/email]]
     db)

=> #{[17592186045424] [17592186045425]}

The large integers are entity ids. Entity ids are auto-assigned, so you may not see the same values on your system. There are several things to note here:
  • As query is the most commonly used function in Datomic, it has the terse name d/q.
  • The first argument is a query expression. The :where clause indicates what you want to find, "those entities ?e that have a :user/email attribute." The :find clause tells which variables to return.
  • The second argument is the database value we obtained in the previous step.

A simple join

Rather than finding all users, let's query for a particular user. This requires joining two query inputs: the database value, and the email you are seeking:

Listing 4: Finding a Specific User

(d/q '[:find ?e
       :in $ ?email
       :where [?e :user/email ?email]]
     db
     "editor@example.com")

=> #{[17592186045425]}

Notice that there are now three arguments to the query: the query expression plus the two inputs. Also, since there are two inputs, there is now an :in clause to name the inputs, in the order they appear, e.g. $ binds the database, and ?email binds "editor@example.com". Names starting with $ name databases, and names starting with ? name variables.

A database join

The previous join of a database to a single prebound variable may not even look like a join to you – in many databases this operation would be called a parameterized query. So let's look at a more traditional join that leverages indexes in the database to associate more than one "type" of entity. The following query will find all of the editor's comments:

Listing 5: Finding a User's Comments

(d/q '[:find ?comment
       :in $ ?email
       :where [?user :user/email ?email]
              [?comment :comment/author ?user]]
     db
     "editor@example.com")

=> #{[17592186045451]
     [17592186045450]
     ...}

Notice that the :where clause now has two data patterns: one associating users to their emails, and another associating comments to the user that authored them. Because the variable ?user appears in both patterns, the query joins on ?user, finding only comments made by the editor.

Aggregates

You can obtain aggregates by changing the :find clause to include an aggregating function. Instead of finding the editor's comments, let's just count them:

Listing 6: Returning an Aggregate

(d/q '[:find (count ?comment)
       :in $ ?email
       :where [?user :user/email ?email]
              [?comment :comment/author ?user]]
     db
     "editor@example.com")

=> [[10]]

More joins

Queries can make many joins. The following query joins comments to their referents, and joins the referents to :user/email to find any comments that are about people:

Listing 7: Multiple Joins

(d/q '[:find (count ?comment)
       :where [?comment :comment/author]
              [?commentable :comments ?comment]
              [?commentable :user/email]]
     db)

=> []

No results is good news, because you don't want to let things get personal by allowing people to comment on other people.

Schema-aware joins

You cannot comment on people, but what kinds of things can you comment on?

The three slots you have seen in data patterns so far are entity, attribute, and value. Up to this point, the entity and value positions have typically been variables, but the attribute has always been constant. That need not be the case. Here is a query that finds all the attributes of entities that have been commented on:

Listing 8: A Schema Query

(d/q '[:find ?attr-name
       :where [?ref :comments]
              [?ref ?attr]
              [?attr :db/ident ?attr-name]]
     db)

=> #{[:story/title]
     [:comment/body]
     [:story/url]
     [:comment/author]
     [:comments]}

Schema entities are ordinary entities, like any other data in the system. Rather than returning their entity ids (?attr in the query above), you can join through :db/ident to find the programmatic identifiers that name each attribute. Judging from these attribute names, there are comments about stories and comments about other comments, exactly what you would expect given the schema.

Entities

In client/server databases where query happens in another process, it is often critical to get the answer and all of its details in a single query. (If you spread the work across multiple steps, you run the risk of the data changing between steps, leading to inconsistent results.)

Since Datomic performs queries in-process, against an immutable database value, it is feasible to decompose the work of query into steps. One very common decomposition is to use a query to find entities, and then to use Datomic's entity API to navigate to the relevant details about those entities.

Let's try it with the editor. First, use a query to find the editor's entity id:

Listing 9: Finding an Entity ID

(def editor-id (->> (d/q '[:find ?e
                           :in $ ?email
                           :where [?e :user/email ?email]]
                         db
                         "editor@example.com")
                    ffirst))

Notice that the query returns only the ?e, not any particular attribute values.

Now, you can call d/entity, passing the database value and the entity id, to get the editor entity.

Listing 10: Getting an Entity

(def editor (d/entity db editor-id))

When you first look at an entity, it appears to be a mostly empty map:

Listing 11: A Lazy Entity

editor
=> {:db/id 17592186045425}

That is because entities are lazy. Their attribute values will appear once you ask for them:

Listing 12: Requesting an Attribute

(:user/firstName editor)
=> "Edward"

If you are feeling more eager, you can touch an entity to immediately realize all of its attributes:

Listing 13: Touching an Entity

(d/touch editor)
=> {:user/firstName "Edward",
    :user/lastName "Itor",
    :user/email "editor@example.com",
    :db/id 17592186045425}

Are Entities ORM?

No. Entities are used for some of the same purposes that you might use an ORM, but their capabilities are quite different. Entities differ from ORM objects in that the conversion between raw datoms and entities is entirely a mechanical process. There is never any configuration, and relationships are always available and (lazily!) navigable.

Entities also differ from most ORM objects in that relationships can be navigated in either direction. In the previous code example, you saw how the d/touch function would automatically navigate all outbound relationships from an entity. However, you can also navigate inbound relationships, by following the convention of prefixing attribute names with an underscore. For example, a user's comments happen to be modeled as a relationship from the comment to its author. To reach these comments from the editor entity, you can navigate the :comment/author attribute backwards:

Listing 14: Navigating Backwards

(-> editor :comment/_author)
=> [{:db/id 17592186045441}
    {:db/id 17592186045443}
    ...]

This process can, of course, be extended as far as you like, e.g. the following example navigates to all the comments people have made on the editor's comments:

Listing 15: Navigating Deeper

(->> editor :comment/_author (mapcat :comments))
=> ({:db/id 17592186045448}
    {:db/id 17592186045450}
    ...)

Time travel

Update-in-place databases can tell you about the present, but most businesses also need to know about the past. Datomic provides this by allowing you to take a value of the database as of a certain point in time.

Given any datom, there are three time-related pieces of data you can request:
  • the transaction entity tx that created the datom
  • the relative time, t, of the transaction
  • the clock time :db/txInstant of the transaction
The transaction entity is available as a fourth optional component of any data pattern. The following query finds the transaction that set the current value for the editor's first name:

Listing 16: Querying for a Transaction

(def txid (->> (d/q '[:find ?tx
                      :in $ ?e
                      :where [?e :user/firstName _ ?tx]]
                    db
                    editor-id)
               ffirst))

Given a transaction id, the d/tx->t function returns the system-relative time that the transaction happened.

Listing 17: Converting Transaction to T

(d/tx->t txid)
=> 1023

Relative time is useful for "happened-before" type questions, but sometimes you want to know the actual wall clock time. This is stored once per transaction, as the :db/txInstant property of the transaction entity:

Listing 18: Getting a Tx Instant

(-> (d/entity (d/db conn) txid) :db/txInstant)
=> #inst "2013-02-20T16:27:11.788-00:00"

Given a t, tx, or txInstant value, you can travel to that point in time with d/as-of. The example below goes back in time to before the point that the first name Edward was introduced, to see its past value:

Listing 19: Going Back in Time

(def older-db (d/as-of db (dec txid)))
 
(:user/firstName (d/entity older-db editor-id))
=> "Ed"

The example above shows an as-of value of the database being used as an argument to d/entity, but as-of values can be used anywhere a current database value can be used, including as an argument to query.
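
For instance, here is a sketch of the same lookup run as a query against the as-of view from Listing 19:

(d/q '[:find ?name
       :in $ ?e
       :where [?e :user/firstName ?name]]
     older-db
     editor-id)
=> #{["Ed"]}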

Auditing

While as-of is useful for looking at a moment in the past, you may also want to perform time-spanning queries. This is particularly true in audit scenarios, where you might want a complete history report of some value.

To perform such queries, you can use the d/history view of a database, which spans all of time. The following query shows the entire history of the editor's :user/firstName attribute:

Listing 20: Querying Across All Time

(def hist (d/history db))

(->> (d/q '[:find ?tx ?v ?op
            :in $ ?e ?attr
            :where [?e ?attr ?v ?tx ?op]]
          hist
          editor-id
          :user/firstName)
     (sort-by first))

=> ([13194139534319 "Ed" true]
    [13194139534335 "Ed" false]
    [13194139534335 "Edward" true])

The optional fifth field in a :where data pattern, named ?op in the example above, matches true if the datom is being asserted, and false if it is being retracted.

Transaction 13194139534319 set the editor's first name to "Ed", and transaction 13194139534335 set it to "Edward". The cardinality of :user/firstName is one, which means that the system will only permit one value of that attribute per entity at any given time. To enforce this constraint, Datomic will automatically retract past values where necessary. Thus the transaction that asserts "Edward" also includes a retraction for the previous value "Ed".
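
Because constants can appear in any slot of a data pattern, you can bind the ?op slot to false to pull out just the retractions from the history above; a minimal sketch:

(d/q '[:find ?v
       :in $ ?e ?attr
       :where [?e ?attr ?v _ false]]
     hist
     editor-id
     :user/firstName)
=> #{["Ed"]}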

Everything is Data

One of Datomic's design goals is "put declarative power in the hands of developers." Having queries that run in your process goes a long way to meet this objective, but what if you have data that is not in a Datomic database?

Datomic's query engine can run without a database, against arbitrary data structures in your application. The following example shows the query you began with, this time running against a plain Java list in memory:

Listing 21: Querying Plain Java Data

(d/q '[:find ?e
       :where [?e :user/email]]
     [[1 :user/email "jdoe@example.com"]
      [1 :user/firstName "John"]
      [2 :user/email "jane@example.com"]])

=> #{[1] [2]}

The idea behind POJOs (plain old Java objects) is exactly right, and Datomic encourages you to take it one step further, to plain old lists and maps.

..

Excision

Motivation

It is a key value proposition of Datomic that you can tell not only what you know, but how you came to know it.  When you add a fact:

conn.transact(list(list(":db/add", 42, ":firstName", "John")));

Datomic does more than merely record that 42's first name is "John".  Each datom is also associated with a transaction entity, which records the moment (:db/txInstant) the datom was recorded.
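
For example, a query along the following lines would recover the instant at which that datom was recorded (a sketch; 42 and :firstName are the entity and attribute from the transaction above):

[:find ?inst
 :where [42 :firstName _ ?tx]
        [?tx :db/txInstant ?inst]]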


Given these reified transactions, it is possible to track the history of information.  Let's say John decides he prefers to go by "Jack":

conn.transact(list(list(":db/add", 42, ":firstName", "Jack")));

When you assert a new value for a cardinality-one attribute such as :firstName, Datomic will automatically retract any past value (cardinality-one means that you cannot have two first names simultaneously).  So the database now contains the new assertion of "Jack", plus an automatic retraction of "John", each associated with its own transaction.

Given this information model, it is easy to see that Datomic can support queries that tell you:
  • what you know now
  • what you knew at some point in the past
  • how and when you came to know any particular datom
So far so good, but there is a fly in the ointment.  In certain situations you may be forced to excise data, pulling it out root and branch and forgetting that you ever knew it.  This may happen if you store data that must comply with privacy or IP laws, or you may have a regulatory requirement to keep records for seven years and then "shred" them.  For these scenarios, Datomic provides excision.

Excision in Datomic

You can request excision of data by transacting a new entity with the following attributes:
  • :db/excise is required, and refers to a target entity or attribute to be excised. Thus there are two scenarios: excise all or part of an entity, or excise some or all of the values of a particular attribute.
  • :db.excise/attrs is an optional, cardinality-many, reference attribute that limits an excision to a set of attributes, useful only when the target of excise is an entity. (If :db.excise/attrs are not specified, then all matching attributes will be excised.)
  • :db.excise/beforeT is an optional, long-valued attribute that limits an excision to only datoms whose t is before the specified beforeT, which may be a t or tx id. This can be used with entity or attribute targets.
  • :db.excise/before is an optional, instant-valued attribute that limits an excision to only datoms whose transaction time is before the specified before. This can be used with entity or attribute targets.

Example: Excising Specific Entities

To excise a specific entity, manufacture a new entity with a :db/excise attribute pointing to that entity's id.  For example, if user 42 requests that his personal data be removed from the system, the transaction data would be:

[{:db/id #db/id[db.part/user],
  :db/excise 42}]

Since :db.excise/attrs is not specified in the transaction data above, all datoms about entity 42 will be excised.
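
If, instead, only part of the entity needed to go, you could limit the excision with :db.excise/attrs.  Here is a sketch that excises only the (hypothetical) :user/email datoms of entity 42:

[{:db/id #db/id[db.part/user],
  :db/excise 42,
  :db.excise/attrs [:user/email]}]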

Example: Excising a Window in Time

To excise old values of a particular attribute, you can create an excision for the attribute you want to eliminate, and then limit the excision using either before or beforeT.  Imagine tracking application events that have users, categories, and details.  Your application produces a ton of events, but you don't care about the old ones.  Here is a transaction that will excise all the pre-2012 events:

  [{:db/id #db/id[db.part/user],
    :db/excise :event/user
    :db.excise/before #inst "2012"}
   {:db/id #db/id[db.part/user],
    :db/excise :event/category
    :db.excise/before #inst "2012"}
   {:db/id #db/id[db.part/user],
    :db/excise :event/description
    :db.excise/before #inst "2012"}]

Remembering That You Forgot

It is a key value proposition of Datomic that you can tell not only what you know, but how you came to know it.  This seems to be at odds with excision: if you remember what you forgot, then you didn't really forget it!

You cannot remember what you forgot, but you can remember that you forgot.  Excise attributes are ordinary attributes in the database, and you can query them.  The following query would tell you if datoms about entity 42 have ever been excised:

[:find ?e :where [?e :db/excise 42]]

Once you find those entities, you can of course use the entity API to navigate to the specific attribute and before filters of the excisions.
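
A minimal sketch in Clojure, assuming a connection conn to the database in question:

(require '[datomic.api :as d])

;; find excisions targeting entity 42, then navigate each excision
;; entity to its attribute and time filters, if any were specified
(let [db (d/db conn)]
  (for [[e] (d/q '[:find ?e :where [?e :db/excise 42]] db)]
    (let [excision (d/entity db e)]
      {:attrs  (:db.excise/attrs excision)
       :before (:db.excise/before excision)})))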

Excise attributes are protected from excision, so you cannot erase your tracks.  (Other important datoms such as schema are also protected, see the documentation for full details.)

Handle With Care

Excision is different from any other operation in Datomic.  While excision requests are transactions, excision itself is not transactional.  Excision will happen on the next indexing job.
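
If you need to know when an excision has been processed, here is a hedged sketch using the peer API's request-index and sync-excise (excision-tx is assumed to be transaction data like the examples above):

(require '[datomic.api :as d])

;; transact the excision request, ask for an indexing job, and block
;; until excisions through the transaction's basis t have been processed
(let [{:keys [db-after]} @(d/transact conn excision-tx)
      t (d/basis-t db-after)]
  (d/request-index conn)
  @(d/sync-excise conn t))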

Excision is permanent and unrecoverable.  Take a backup before performing significant excisions, and use excision only when your domain requires that you deliberately forget certain data.  
..

Riak and Couchbase Support


We are pleased to announce today preliminary support for two new storage services for Datomic: Riak and Couchbase.

Riak is an elastically scalable, distributed, redundant and highly available key-value store in the Dynamo model. It is a great option for Datomic users who want to run such a system on their own premises (vs e.g. DynamoDB on AWS). Because Riak supports only eventual consistency at this time, a Datomic system running on Riak also utilizes Apache ZooKeeper, a highly-available coordination service. Datomic uses ZooKeeper for transactor failover coordination, and for the handful of keys per database that need to be updated with CAS. The bulk of the data is kept in Riak, immutably, and leverages all of Riak's redundancy and availability characteristics.

Couchbase is an elastically scalable, distributed and redundant document store. Like Riak, it supports redundant storage. Unlike Riak, it offers consistency and CAS, at the cost of a more conventional availability model, with either manual or automatic failover.

Both solutions are backed with commercial support offerings from the people who make them.

With all three services (Riak, ZooKeeper, Couchbase), Datomic can run on an existing installation alongside other applications without conflict. Thus you can combine your use of Datomic with the other uses at which these storages excel. We consider this hybrid use to be highly appealing in practice, as different parts of your applications have different requirements.

As always, you can backup and restore to/from these storages and any other, and switch your application from one storage to another with a change to a different URI.

This makes the set of Datomic storages:

  • In-process memory
  • Transactor-local dev/free mode
  • SQL
  • DynamoDB
  • Riak
  • Couchbase
  • Infinispan


We are very excited about these new options, which greatly expand your choices, especially for non-cloud deployments. Each storage represents different tradeoffs, but what is important is that the choices are yours to make, as you decide what best fits your business and technical requirements.

We look forward to your feedback as we fine tune these integrations for production use.

..

