Re: [ANN] gzochi project development release 0.10

From: Amirouche Boubekki
Subject: Re: [ANN] gzochi project development release 0.10
Date: Sat, 13 Aug 2016 21:33:43 +0200
User-agent: Roundcube Webmail/1.1.2

On 2016-08-13 20:33, Julian Graham wrote:
Hi Amirouche!

> Can you explain in more detail what this B+tree-based storage engine
> is? And what it is used for?


Some context: gzochi is an application server for games written in
Guile. It provides various services to the applications that it hosts,
including data storage. The underlying implementation of the data
store (the "storage engine") is configurable - the framework comes
with a Berkeley DB storage engine and an in-memory storage engine.
(And you can write your own, if you wish.)
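(To make the "pluggable engine" idea concrete, here is a minimal sketch
in Python. The names and methods are hypothetical, for illustration
only; gzochi's real storage-engine interface is a C API and differs in
its details.)

```python
from abc import ABC, abstractmethod

class StorageEngine(ABC):
    """Hypothetical engine interface: a key/value store over bytes."""

    @abstractmethod
    def get(self, key):
        """Return the value for key, or None if absent."""

    @abstractmethod
    def put(self, key, value):
        """Store value under key."""

    @abstractmethod
    def delete(self, key):
        """Remove key if present."""

class MemoryStorageEngine(StorageEngine):
    """Trivial in-memory engine backed by a dict (no transactions)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

    def delete(self, key):
        self._data.pop(key, None)

engine = MemoryStorageEngine()
engine.put(b"player:1", b"serialized-state")
print(engine.get(b"player:1"))  # b'serialized-state'
```

A Berkeley DB-backed engine would implement the same three operations
against a BDB database handle, which is what makes the backend
swappable without touching game code.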

All of the data structures manipulated by a game running in the gzochi
container - the game's "object graph" - are serialized and persisted
to the data store. The container guarantees a consistent view of the
object graph to each "task" that runs as part of the game, and it
handles changes to the graph transactionally. It's up to the storage
engine implementation to make sure that every transaction is isolated
from every other transaction, that deadlocks are resolved properly,
that each transaction either commits or rolls back atomically, etc.
The Berkeley DB-based storage engine relies on BDB for those things.
The in-memory engine uses a B+tree and intention (read/write) locking
to implement transactions.
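(A rough sketch of those transactional guarantees, in hypothetical
Python: writes are buffered and applied all-or-nothing at commit, so
other transactions never see partial state. A real engine locks at
key or B+tree-node granularity and detects deadlocks properly; here a
single lock with an acquisition timeout stands in for both.)

```python
import threading

class Store:
    """Shared store protected by one coarse lock (illustration only)."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def begin(self):
        return Transaction(self)

class Transaction:
    """Buffers writes; nothing is visible until commit()."""

    def __init__(self, store):
        self._store = store
        self._writes = {}

    def get(self, key):
        # Read-your-own-writes, otherwise fall through to the store.
        if key in self._writes:
            return self._writes[key]
        return self._store._data.get(key)

    def put(self, key, value):
        self._writes[key] = value

    def commit(self):
        # A timed-out acquire stands in for deadlock detection:
        # the transaction rolls back instead of blocking forever.
        if not self._store._lock.acquire(timeout=1.0):
            self.rollback()
            raise RuntimeError("possible deadlock; transaction rolled back")
        try:
            self._store._data.update(self._writes)  # all-or-nothing
        finally:
            self._store._lock.release()

    def rollback(self):
        self._writes.clear()

store = Store()
tx = store.begin()
tx.put("a", 1)
assert "a" not in store._data  # isolated: invisible before commit
tx.commit()
print(store._data["a"])  # 1
```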

The B+tree storage engine has a second function when running gzochi in
a distributed / high-availability configuration: It's used to cache a
subset of the object graph on each node in the cluster while that node
is executing game code that manipulates that part of the graph. (In
this configuration, the centralized "meta server" is responsible for
storing and retrieving the canonical version of the object graph.)
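(The node-local caching role can be sketched like this, again in
hypothetical Python. The class names and the simple write-through
policy are illustrative assumptions; gzochi's actual cache-coherence
protocol between nodes and the meta server is more involved.)

```python
class MetaServer:
    """Stand-in for the canonical, centralized object store."""

    def __init__(self):
        self._data = {"obj:1": "canonical-state"}

    def fetch(self, key):
        return self._data.get(key)

    def store(self, key, value):
        self._data[key] = value

class NodeCache:
    """Caches the subset of the object graph a node is working on."""

    def __init__(self, meta):
        self._meta = meta
        self._cache = {}

    def get(self, key):
        # On a miss, pull the canonical copy from the meta server.
        if key not in self._cache:
            self._cache[key] = self._meta.fetch(key)
        return self._cache[key]

    def put(self, key, value):
        # Write through so the meta server stays canonical.
        self._cache[key] = value
        self._meta.store(key, value)

meta = MetaServer()
node = NodeCache(meta)
print(node.get("obj:1"))  # 'canonical-state'
```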

Does that answer your question?

Yes, thanks!

Based on a few tests, WiredTiger is faster than Berkeley DB. You might
consider having a look at it.


Amirouche ~ amz3 ~
