DBreeze Documentation Actual
(DBreeze v. 1.084.2017.0321)
It's free software for those who think that it should be free.
Please notify us about your usage of our software, so we can evaluate and visualize its efficiency.
Document evolution.
This document evolves downward. If you have read the base document before, all new features will be reflected underneath. Each new evolution starts with a mark in the format [year month day], e.g. [20120521], for easy search.
[20120509]
Getting started.
DBreeze.dll contains fully managed code without references to other libraries. The current DLL size is around 327 KB. Start using it by adding a reference to it in your project. Don't forget DBreeze.XML from the Release folder to get VS IntelliSense help.
DBreeze is a disk-based database system, though it can also work as an in-memory storage. DBreeze doesn't have a virtual file system underneath and keeps all working files in your OS file system; that's why you must instantiate its engine by supplying a folder name where all files will be located.
if(engine == null)
engine = new DBreezeEngine(@"D:\temp\DBR1");
It's important to call the DBreeze engine's Dispose in the Dispose routine of your application or DLL, to achieve graceful application termination.
if(engine != null)
engine.Dispose();
After you have instantiated the engine, two options are available to you: working with the database scheme or working with transactions.
Scheme.
You don't need to create tables via the Scheme; it is used to manipulate already existing objects.
Deleting table:
engine.Scheme.DeleteTable(string userTableName)
Renaming table:
engine.Scheme.RenameTable(string oldTableName, string newTableName)
More functions will be added there later, and their descriptions here.
Transactions
In DBreeze all operations on data residing inside tables must occur inside a transaction.
Please note that it's important to dispose of the transaction after all necessary operations are done (a using statement does this automatically).
Please note that one transaction can run only in one .NET managed thread and cannot be delegated to other threads.
Please note that nested transactions are not allowed (the parent transaction will be terminated).
Different things can happen during in-transaction operations; that's why we highly recommend using a try-catch block together with the transaction and logging exceptions for future analysis.
try
{
using (var tran = engine.GetTransaction())
{
}
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
Every table in DBreeze is a key/value storage. On the low level, keys and values are represented as arrays of bytes - byte[].
On the top level you can choose your own data type, from the allowed list, to be stored as a key or value.
There are some non-standard data types in DBreeze, added for usability; they are accessible inside the DBreeze.DataTypes namespace.
using DBreeze.DataTypes;
Available key types:
byte[]
int
uint
long
ulong
short
ushort
byte
sbyte
DateTime
double
float
decimal
string - this one will be converted into byte[] using UTF8 encoding
DbUTF8 - this one will be converted into byte[] using UTF8 encoding
DbAscii - this one will be converted into byte[] using ASCII encoding
DbUnicode - this one will be converted into byte[] using Unicode encoding
char
Available value types:
byte[]
int
int?
uint
uint?
long
long?
ulong
ulong?
short
short?
ushort
ushort?
byte
byte?
sbyte
sbyte?
DateTime
DateTime?
double
double?
float
float?
decimal
decimal?
string - this one will be converted into byte[] using UTF8 encoding
DbUTF8 - this one will be converted into byte[] using UTF8 encoding
DbAscii - this one will be converted into byte[] using ASCII encoding
DbUnicode - this one will be converted into byte[] using Unicode encoding
bool
bool?
char
char?
DbXML<T>
DbMJSON<T>
DbCustomSerializer<T>
The last three are used for storing objects inside the value; we will talk about them later.
All operations with data, except those which can be done via the Scheme, must be done inside a transaction scope. After typing tran. IntelliSense will give you a list of all possible operations. We start with inserting data into a table.
public void Example_InsertingData()
{
using (var tran = engine.GetTransaction())
{
tran.Insert<int, int>("t1", 1, 1);
tran.Commit();
}
}
In this example we have inserted data into the table named “t1”. The table will be created automatically if it doesn't exist.
The key type of our table is int (the key is 1), and the value type is also int (the value is also 1).
After one or a series of modifications inside the transaction we must either Commit or Rollback them.
Note that the Rollback function runs automatically in the transaction's Dispose, so all uncommitted modifications of the database inside the transaction will be rolled back automatically.
You can be sure that such a modification will not be applied to the table, but an empty table will nevertheless be created if it didn't exist before.
The table doesn't store the data types which you assume must be there; it holds only byte arrays of keys and values, and only on the upper level are the acquired byte[] converted into keys and values of the appropriate data types from the generic constructions.
You can modify more than one table inside of a transaction:
tran.Insert<int, int>("t1", 2, 1);
tran.Insert<uint, string>("t2", 2, "world");
tran.Commit();
//or
//tran.Rollback();
The Commit or Rollback applies to all modifications made inside the transaction. If something happens during a Commit, all modifications will be rolled back automatically.
The only acceptable reason for a Rollback failure is damage to the physical storage; exceptions in the rollback procedure will bring the database to a non-operable state.
After its start, the DBreeze database checks the transactions journal and restores tables to their previous state, so there should be no problems with power loss or any other accidental software termination at any point of process execution.
A Commit operation is always very fast and takes the same amount of time independent of the quantity of modifications made.
A Rollback can take longer, depending upon the quantity of data and the character of the modifications which were made within the database.
If you are going to insert or update a big data set, first execute the insert, update or remove commands as many times as you need and then call tran.Commit().
Calling tran.Commit after every operation will not make the table's physical file bigger, but it will take more time than one Commit after all operations.
//THIS IS SLOWER
for(int i=0;i<1000000;i++)
{
    tran.Insert<int,int>("t1",i,i);
    tran.Commit();
}
DBreeze algorithms are built to work with maximum efficiency when inserting bulk data sorted in ascending order.
for(int i=0;i<1000000;i++)
{
    tran.Insert<int,int>("t1",i,i);
}
tran.Commit();
//or
DateTime dt=DateTime.Now;
for(int i=0;i<1000000;i++)
{
    tran.Insert<DateTime, int>("t1",dt,i);
    dt=dt.AddSeconds(7);
}
tran.Commit();
The above code executes in 9 seconds (year 2012; 1.5 seconds in year 2015).
If you start to insert data in random order, it can take a bit longer. That's why, if you have a big data set in memory, sort it ascending by key before saving it to the database, and insert after that; it will speed up your program.
If you copy data from other databases to DBreeze, take a chunk (e.g. 1 MLN records), sort it in memory by key ascending, insert it into DBreeze, then take another chunk, and so on.
In DBreeze the maximal key length is 65535 bytes (UInt16.MaxValue) and the maximal value length is 2147483647 bytes (Int32.MaxValue).
It's not possible to save a byte array bigger than 2GB as a value. For bigger data elements we will have to develop another strategy in the future (read about DataBlocks later).
In DBreeze we have the ability to partially update or insert a value. This is possible because values are stored as byte[]. It doesn't matter which data type is already stored in the table; you can always access and change it as a byte array.
DBreeze has a special namespace inside which allows you to work with byte arrays easily:
using DBreeze.Utils;
Now you can convert any standard data type into byte array and back.
The above instructions can be run one by one and will lead to the result that under key 10 we will have the value 1.
We can achieve the same result by running 4 instructions of tran.InsertPart, as in the sketch below. The fourth parameter of tran.InsertPart is exactly the index from which we want to insert our byte[] array.
This technique can be used if we think about the value as a set of columns of known length, like in standard SQL databases; it gives us the ability to change every column separately, without touching other parts of the value.
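A minimal sketch of such a 4-step composition (a hypothetical reconstruction: the byte layout assumes DBreeze's sortable int encoding, in which the sign bit is flipped):

using (var tran = engine.GetTransaction())
{
    //composing the 4 bytes of the int value 1 one byte at a time
    tran.InsertPart<int, byte[]>("t1", 10, new byte[] { 0x80 }, 0); //sign-flipped high byte
    tran.InsertPart<int, byte[]>("t1", 10, new byte[] { 0x00 }, 1);
    tran.InsertPart<int, byte[]>("t1", 10, new byte[] { 0x00 }, 2);
    tran.InsertPart<int, byte[]>("t1", 10, new byte[] { 0x01 }, 3);
    tran.Commit();
}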
Note, you can always switch to the byte[] data type for values and keys:
tran.Insert<int, int>
//or
tran.Insert<int, byte[]>
Note, if you want to insert or update a value starting from an index which is bigger than the current value length, the empty space will be filled with bytes of value 0.
Key 12 didn't exist before; after an insert for key 12 followed by an InsertPart at index 10, the stored value becomes:
"0x80 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x80"
Note, DBreeze will try to reuse the same physical file space on a record update, if the existing record length is suitable for it.
In a Select you must supply the data types for the key and the value in generic format.
In our case, we want to read from table “t1” the key of type int with value 10:
var row = tran.Select<int, int>("t1", 10);
We can start to use the key and the value only after checking whether the table has such a value inside.
if (row.Exists)
{
    int key = row.Key;
    int res = row.Value;
}
So, if the row exists, we can fetch its key (row.Key) and the full record value (row.Value); it will be automatically converted from byte[] into the data type which you supplied while forming the Select.
Independently of the record's data type, Row has the method GetValuePart, with overloads, which helps to get the value partially and always as byte[]. DBreeze.Utils extensions can help to convert the value to other data types.
If, starting from index 4, the value held some kind of ulong, which occupies 8 bytes, we could say:
ulong x = row.GetValuePart(4,8).To_UInt64_BigEndian();
Note that the DBreeze.Utils conversion algorithms are specifically tailored to DBreeze data types: they create sortable byte[] sequences, in contrast to the built-in .NET byte[] conversion functions.
tran.Insert<int,int?>("t1",10,null);
tran.Commit();
var row = tran.Select<int,int?>("t1",10);
if(row.Exists)
{
    int? val = row.Value; //val will be null
}
DBreeze.Diagnostic.SpeedStatistic.StartCounter("INSERT");
DateTime dt = DateTime.Now;
for (int i = 0; i < 1000000; i++)
{
    tran.Insert<DateTime, byte?>("t1", dt, null);
    dt = dt.AddSeconds(7);
}
tran.Commit();
DBreeze.Diagnostic.SpeedStatistic.StopCounter("INSERT");
DBreeze.Diagnostic.SpeedStatistic.PrintOut(true);

DBreeze.Diagnostic.SpeedStatistic.StartCounter("FETCH");
//... fetch the inserted keys here, e.g. via a range select ...
DBreeze.Diagnostic.SpeedStatistic.StopCounter("FETCH");
DBreeze.Diagnostic.SpeedStatistic.PrintOut(true);
To limit the quantity of returned data you can either break the iteration or use the Take statement.
SelectForward - starts from the first key and iterates forward to the last key in ascending sorted order.
SelectBackward - starts from the last key and iterates backward to the first key in descending sorted order.
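For example, limiting a forward enumeration to the first 10 rows (a minimal sketch; the table name and types are illustrative):

foreach (var row in tran.SelectForward<int, int>("t1").Take(10))
{
    Console.WriteLine("Key: {0}; Value: {1}", row.Key, row.Value);
}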
DON'T USE LINQ after SelectForward or SelectBackward for filtering (e.g. with .Where), because it will work much, much slower than the specially tailored methods; use instead:
tran.SelectForwardStartFrom<int,int>("t1",10,false).Take(10)
and the other specialized range selects described below.
Remember that all data types are converted into byte[], so you can also search by byte[] prefixes:
SelectForwardStartsWith<byte[],int>("t1",new byte[] {0x12})
and
SelectForwardStartsWith<byte[],int>("t1",new byte[] {0x10, 0x17})
If we insert:
tran.Insert<string,string>("t1","w","w");
tran.Insert<string,string>("t1","ww","ww");
tran.Insert<string,string>("t1","www","www");
then
SelectForwardStartsWith<string,string>("t1","ww")
will return us
"ww"
"www"
and SelectBackwardStartsWith<string,string>("t1","ww")
will return us
"www"
"ww"
SelectForwardSkip - this command skips “skippingQuantity” elements and then starts enumeration in ascending order.
SelectBackwardSkip - this command skips “skippingQuantity” elements backward and then starts enumeration in descending order.
SelectForwardSkipFrom - this command skips “skippingQuantity” elements from the specified key (if the key is not found, the next one after it is counted as the first skipped) and then starts enumeration in ascending order.
SelectBackwardSkipFrom - this command skips “skippingQuantity” elements backward from the specified key and then starts enumeration in descending order.
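A small paging sketch (assuming the skip overloads accept the skipping quantity as shown):

//skip the first 100000 keys, then enumerate ascending
foreach (var row in tran.SelectForwardSkip<int, int>("t1", 100000).Take(20))
{
    Console.WriteLine(row.Key);
}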
Note that Skip needs to iterate over keys to count the exact skipping quantity. That's why the developer always has to find a compromise between speed and skipping quantity. Skipping 1 MLN elements in any direction, starting from any key, takes 4 seconds (Intel i7 8 cores, SCSI drive, 8GB RAM, year 2012). Skipping 100 000 records takes 400 ms, 10 000 takes 40 ms, respectively.
So, if you are going to implement grid paging, just remember the first key shown in the grid and then skip from it the quantity of elements shown in the grid, using SelectForwardSkipFrom or SelectBackwardSkipFrom.
if (row.Exists)
{
    //etc...
}
If you try to read from a non-existing table, the table will not be created in the file system. Range selects like tran.SelectForward etc. will return nothing in your foreach statement.
tran.RemoveKey<int>("t1",10)
tran.Commit();
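Removing all keys of a table is done with a separate command (a sketch; the method name RemoveAllKeys is assumed here, and its withFileRecreation parameter is explained in the notes below):

tran.RemoveAllKeys("t1", true); //true = withFileRecreation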
Note, if the withFileRecreation parameter is set to true, we don't need to Commit this modification; it happens automatically. The file that holds the table will be re-created.
Note, if the withFileRecreation parameter is set to false, the old data will not be visible any more, but the old information will still reside in the table file. We need a Commit after this modification.
tran.Insert<int,int>("t1",10,10);
tran.ChangeKey<int>("t1", 10, 11);
tran.Commit();
we will have in the table one key 11 with the value 10.
tran.Insert<int,int>("t1",10,10);
tran.Insert<int,int>("t1",11,11);
tran.ChangeKey<int>("t1", 10, 11);
tran.Commit();
we will have in the table one key 11 with the value 10 (the old value for key 11 will be lost).
For storing objects in a table we have 3 extra data types, accessible via the DBreeze.DataTypes namespace.
DbXML<T> - will automatically use the built-in .NET XML serializer/deserializer for objects. It is slower than the others in both operations; furthermore, the data occupies much more physical space than with the others.
For DbMJSON<T> and DbCustomSerializer<T> you can plug in your own serializer delegates:
DBreeze.Utils.CustomSerializator.Serializator = JsonConvert.SerializeObject;
DBreeze.Utils.CustomSerializator.Deserializator = JsonConvert.DeserializeObject;
But if you don't want to use JSON.NET, try Microsoft JSON. It's about 40% slower on deserialization and 5-10% slower on serialization than JSON.NET.
uint identity = 0;
//assuming Max returns the row with the maximal key, to generate the next identity
var row = tran.Max<uint, DbMJSON<Article>>("Articles");
if (row.Exists)
    identity = row.Key;
identity++;
tran.Insert<uint, DbMJSON<Article>>("Articles", identity, art); //art is an Article instance
tran.Commit();
Note, DbMJSON, DbXML and DbCustomSerializer have an overloaded operator, so you can pass art directly without writing new DbMJSON<Article>(art); just say art:
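A sketch of such an insert relying on the implicit conversion (identity and art as in the example above):

tran.Insert<uint, DbMJSON<Article>>("Articles", identity, art); //implicit cast Article -> DbMJSON<Article>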
Getting objects:
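A minimal sketch of reading an object back (the .Get property returning the deserialized object is also used later in this document):

var row = tran.Select<uint, DbMJSON<Article>>("Articles", 1);
if (row.Exists)
{
    Article art = row.Value.Get; //deserialized object
}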
Multi-threading
In DBreeze, tables are always accessible for parallel READs of the last committed data from multiple threads.
Note, if one of the threads needs to read data from tables inside a transaction and wants to be sure that other threads will not modify this data till the end of the transaction, this thread must reserve the tables for synchronized read.
The transaction also has a method for table synchronization:
tran.SynchronizeTables
This method has overloads, and you can supply List<string> or params string[] as parameters.
If you think there is no necessity to block the table(s) and other threads may write data in parallel, just don't use tran.SynchronizeTables.
This technique is applicable in all reporting cases. If a user needs to know his bank account state, we don't need to block the table with account information; we just read the account state and return it. It doesn't matter that the account state is changing at this very moment - it's a question of a moment. If the user requests his account state in 5 minutes, he will get the already modified account.
For example, we iterate over the table Items because someone has requested its full list.
//we have iterated over 50 items, and at this moment another thread deletes itemId 1 and commits its transaction
//Result: it's a question of the moment whether this item gets into the final List; it doesn't matter in this case.
//we have already iterated over 75 items, and at this moment another thread deletes itemId 90 and commits its transaction
//after 89 we will get item 91
//Result: it's a question of the moment; item 90 will not be added to the final List, and it doesn't matter in this case.
And if you want to be sure that other threads will not modify the “Items” table while you are fetching the data, use
tran.SynchronizeTables("Items");
If your data projection is spread among many tables, first get all pieces of the data from the different tables, always checking row.Exists in case of direct selects, and only when you have the full object constructed, return it to the final projection as a ready element.
Note: if you have received a row and it exists, it doesn't mean that you have already acquired the value. The value is read only when you access the property row.Value (lazy value loading). If another thread removes the value in between - after you have acquired the row but before you have acquired the value - the value will still be returned, because after removal the data stays on the disk; only keys are marked as deleted. This behaviour should be fine for a non-synchronized read, because it's a question of the moment.
If you have acquired a row and it exists in one thread, and you are about to get the value while at this moment another thread updates it, your thread will receive the updated value.
If your thread is about to retrieve a value and at this moment DBreeze.Scheme deletes the table, an exception will be raised inside the transaction, controlled by the try-catch integrated with the using statement.
Use it either in the constructor, after engine initialization, or for temporary tables which are used for sub-computations with the help of the database and are definitely accessed by only one thread. For tables which are under read-write pressure it's better to use tran.RemoveAll(false) and then, one day, to compact the table by copying the existing values into a new table and renaming the new table to the old name.
Copying the data is better done on the byte[] level; it's faster than casting and serializing/de-serializing objects.
tran.Commit();
Note, we create foreach loop which reads from one table and after that writes into the other
table. From HDD point ov view we make such operation:
R-W-R-W-R-W-R-W …..
If you have mechanical HDD, its head must always move between two files to complete this
operation, what is not so efficient.
R-R-R-R-W-R-R-R-R-W-R-R-R-R-W ….
So, first we read to the memory a big chunk (1K/10K/100K/1MLN of records) and then sort it
by key in ascending order and insert it in bulk to the copy table.
A Dictionary<TKey,TValue> will not be able to sort byte[] keys. For this we construct a hash-string from the key using DBreeze.Utils (e.g. with the ToBytesString extension) and use this hash as the Dictionary key. The copy procedure with the
R-R-R-R-W-R-R-R-R-W-R-R-R-R-W ....
sequence:
using DBreeze.Utils;

int i = 0;
int chunkSize = 100000;
Dictionary<string, KeyValuePair<byte[], byte[]>> cacheDict = new Dictionary<string, KeyValuePair<byte[], byte[]>>();

foreach (var row in tran.SelectForward<byte[], byte[]>("Articles"))
{
    //hash-string of the key keeps the Dictionary sortable by key
    cacheDict.Add(row.Key.ToBytesString(""), new KeyValuePair<byte[], byte[]>(row.Key, row.Value));
    i++;
    if (i == chunkSize)
    {
        //saving sorted values to the new table in bulk
        foreach (var kvp in cacheDict.OrderBy(r => r.Key))
        {
            tran.Insert<byte[], byte[]>("Articles Copy", kvp.Value.Key, kvp.Value.Value);
        }
        cacheDict.Clear();
        i = 0;
    }
}
//(flush the remaining cacheDict entries the same way before the Commit)
tran.Commit();
Note, actually we don't need to sort the dictionary, because SelectForward from table Articles already gives us sorted values, and they migrate into the cache Dictionary in sorted sequence, so the complete code will look like this:

int i = 0;
int chunkSize = 100000;
Dictionary<byte[], byte[]> cacheDict = new Dictionary<byte[], byte[]>();

foreach (var row in tran.SelectForward<byte[], byte[]>("Articles"))
{
    cacheDict.Add(row.Key, row.Value);
    i++;
    if (i == chunkSize)
    {
        //saving sorted values to the new table in bulk
        foreach (var kvp in cacheDict)
        {
            tran.Insert<byte[], byte[]>("Articles Copy", kvp.Key, kvp.Value);
        }
        cacheDict.Clear();
        i = 0;
    }
}
tran.Commit();
This technique is used when you need to get data (select) before a modification (insert, update, etc.):

try
{
    using (var tran = engine.GetTransaction())
    {
        tran.SynchronizeTables(tableUserInfo);
        //after SynchronizeTables, none of the other threads will write into
        //table tableUserInfo till the transaction is released.
        decimal accountState = 0;
        var row = tran.Select<long, decimal>(tableUserInfo, userId); //key/value types assumed for this sketch
        if (row.Exists)
            accountState = row.Value;
        accountState += sum;
        tran.Insert<long, decimal>(tableUserInfo, userId, accountState);
        tran.Commit();
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.ToString());
    return false;
}
return true;
If inside a transaction we write to only one table and use unsynchronized reads for the other tables, we don't need to use SynchronizeTables.
But once we have inserted/updated/removed a key in the table, DBreeze automatically blocks the whole table for writing (as if SynchronizeTables(“t1”) had been used) till the end of the transaction.
In the following example, a transaction first blocks table “t1” and then “t2”:
Thread 1:
tran.Insert<int,int>("t1",1,1);
tran.Insert<int,int>("t2",1,1);
Imagine that we have a parallel thread which writes into the same tables, but in the other sequence:
Thread 2:
tran.Insert<int,int>("t2",1,1);
tran.Insert<int,int>("t1",1,1);
Thread 2 has blocked table “t2”, which is going to be modified by Thread 1, and Thread 1 has blocked table “t1”, which is going to be modified by Thread 2.
Such a situation is called a deadlock.
DBreeze automatically drops one of these threads with a Deadlock Exception, and the other thread is able to finish its job successfully.
But this is only a part of the solution. To make the program deadlock-safe, use the SynchronizeTables construction in both threads:
Thread 1:
tran.SynchronizeTables("t1","t2");
tran.Insert<int,int>("t1",1,1);
tran.Insert<int,int>("t2",1,1);
Thread 2:
tran.SynchronizeTables("t2","t1");
tran.Insert<int,int>("t2",1,1);
tran.Insert<int,int>("t1",1,1);
Both threads will be executed without exceptions, one after another - an absolute defence from the deadlock situation.
We didn't use the tran.SynchronizeTables construction and we didn't write to this table before, so we will see only the last committed data; even if another thread is changing the same data in parallel, this transaction will receive only the last committed data for this table.
All reads of a table inside the current transaction, if the table is in the modification list (via SynchronizeTables or simply insert/update/remove), will return the modified values even if the data has not been committed yet:
tran.RemoveKey<int>("t1",1);
tran.Commit();
Because in the NoSQL concept we have to deal with many tables inside one transaction, DBreeze has special constructions for table locking. All of them are available via tran.SynchronizeTables.
Again, tran.SynchronizeTables can be used only once inside a transaction and before any modification command, but it can be used after read commands.
ALLOWED:
tran.SynchronizeTables(ids);
tran.Insert<int,int>("t1",1,99);
tran.Commit();
}
Note, it's possible to insert data into tables which were not synchronized by SynchronizeTables.
But it's better to use this for temporary tables, to avoid deadlocks. To add uniqueness to the temporary table name, add the ThreadId:

string tempTable = "tmp" + System.Threading.Thread.CurrentThread.ManagedThreadId; //naming scheme assumed
//in case the previous process was interrupted and tempTable was not deleted
engine.Scheme.DeleteTable(tempTable);
tran.Insert<int,int>(tempTable,1,99);
engine.Scheme.DeleteTable(tempTable);
tran.Commit();
}catch(System.Exception ex)
{
    //ex handle
    engine.Scheme.DeleteTable(tempTable);
}
}
NOT ALLOWED:
tran.Insert<int,int>("t2",1,1); //a modification before SynchronizeTables makes the next line throw
tran.SynchronizeTables("t1");
tran.Insert<int,int>("t1",1,99);
tran.Commit();
}
Table names may form hierarchies, e.g.:
Articles123
Articles231
Articles123/SubItems123/SubItems123
Articles123/Items1257/IOo4564
Articles123/SubItems546
and so on.
tran.SynchronizeTables("Articles$") means that we block for writing tables like:
Articles123
Articles456
Several patterns can be combined:
tran.SynchronizeTables("Articles1/Items$","Articles#/SubItems*","Price1","Price#/Categories#/El*")
Non-Unique Keys
There are different approaches to handle non-unique keys. One of them is to create a separate table for every non-unique key and store all references to this key inside it. Sometimes this approach is good.
Note that DBreeze is a professional database for high-performance and mission-critical applications. The developer spends a little more time on the Data Access Layer but gets very fast responses from the database in return.
Imagine that you have plenty of Articles and each of them has a price inside. You know that one of the requirements of your application is to show articles sorted by price. Another requirement is to show articles within a price range.
This means that, besides the table holding the articles, you will need a special table where you store prices as keys, to be able to use DBreeze's SelectForwardStartFrom or SelectForwardFromTo.
The developer, while inserting one article, has to fill two tables (it's the minimum for this example): Articles and Prices.
But how can we store prices as keys? They are not unique.
using DBreeze;
using DBreeze.Utils;
using DBreeze.DataTypes;
public class Article
{
    public Article()
    {
        Id = 0;
        Name = String.Empty;
        Price = 0f;
    }

    public uint Id;       //field types assumed from the conversions used below
    public string Name;
    public float Price;
}
id++;
tran.Insert<uint, DbMJSON<Article>>("Articles", id, art);
id++;
tran.Insert<uint, DbMJSON<Article>>("Articles", id, art);
idAsByte = id.To_4_bytes_array_BigEndian();
priceKey = art.Price.To_4_bytes_array_BigEndian().Concat(idAsByte);
Console.WriteLine("{0}; Id: {1}; IdByte[]: {2}; btPriceKey: {3}", art.Name, id,
idAsByte.ToBytesString(""), priceKey.ToBytesString(""));
tran.Insert<byte[], byte[]>("Prices", priceKey, null);
id++;
tran.Insert<uint, DbMJSON<Article>>("Articles", id, art);
idAsByte = id.To_4_bytes_array_BigEndian();
priceKey = art.Price.To_4_bytes_array_BigEndian().Concat(idAsByte);
Console.WriteLine("{0}; Id: {1}; IdByte[]: {2}; btPriceKey: {3}", art.Name, id,
idAsByte.ToBytesString(""), priceKey.ToBytesString(""));
tran.Insert<byte[], byte[]>("Prices", priceKey, null);
id++;
tran.Insert<uint, DbMJSON<Article>>("Articles", id, art);
idAsByte = id.To_4_bytes_array_BigEndian();
priceKey = art.Price.To_4_bytes_array_BigEndian().Concat(idAsByte);
Console.WriteLine("{0}; Id: {1}; IdByte[]: {2}; btPriceKey: {3}", art.Name, id,
idAsByte.ToBytesString(""), priceKey.ToBytesString(""));
tran.Insert<byte[], byte[]>("Prices", priceKey, null);
//this article was added later and not reflected in the post explanation
art = new Article()
{
Name = "MousePad",
Price = 3.0f
};
id++;
tran.Insert<uint, DbMJSON<Article>>("Articles", id, art);
idAsByte = id.To_4_bytes_array_BigEndian();
priceKey = art.Price.To_4_bytes_array_BigEndian().Concat(idAsByte);
Console.WriteLine("{0}; Id: {1}; IdByte[]: {2}; btPriceKey: {3}", art.Name, id,
idAsByte.ToBytesString(""), priceKey.ToBytesString(""));
tran.Insert<byte[], byte[]>("Prices", priceKey, null);
tran.Commit();
Console.WriteLine("***********************************************");
byte[] searchKey =
price.To_4_bytes_array_BigEndian().Concat(fakeId.To_4_bytes_array_BigEndian());
Article art=null;
if (artRow.Exists)
{
art = artRow.Value.Get;
Console.WriteLine("Articel: {0}; Price: {1}", art.Name, art.Price);
}
}
Console.WriteLine("***********************************************");
//Fetching data > 10
using (var tran = engine.GetTransaction())
{
    //We are interested here in Articles with the cost > 10
    float price = 10.0f;
    uint fakeId = UInt32.MaxValue; //highest possible id part, so the enumeration starts strictly after price 10
    byte[] searchKey =
        price.To_4_bytes_array_BigEndian().Concat(fakeId.To_4_bytes_array_BigEndian());
    foreach (var row in tran.SelectForwardStartFrom<byte[], byte[]>("Prices", searchKey, false))
    {
        var artRow = tran.Select<uint, DbMJSON<Article>>("Articles",
            row.Key.Substring(4, 4).To_UInt32_BigEndian());
        if (artRow.Exists)
        {
            var art = artRow.Value.Get;
            Console.WriteLine("Article: {0}; Price: {1}", art.Name, art.Price);
        }
    }
}
Every article, when inserted into the Articles table, receives its unique id of type uint:
Articles<uint,DbMJSON<Article>>(“Articles”)
Remember that the DBreeze.Utils namespace contains a lot of extensions for converting different data types to byte[] and back: decimals, doubles, floats, integers etc.
The article price is a float in our example and can be converted to a byte[4] (a sortable byte array from DBreeze.Utils; System.BitConverter will not give you such results).
As you see, we had 4 articles, 2 of them with the same price.
We achieve uniqueness of the price on the byte level by concatenating two byte arrays.
The first part is the price converted to a byte array (for the article Keyboard):
float 10.0f -> AE-0F-42-40
The second part is the uint Id from the Articles table converted to a byte array (for the article Keyboard):
uint 2 -> 00-00-00-02
When we concatenate both byte arrays for every article, exactly these final byte arrays are what we insert into the Prices table.
SelectForward and SelectBackward over the Prices table will give you results already sorted by price.
To search, we need to concatenate the price bytes (btKey) with a full article id, and here is the trick:
uint id = 0;
btKey = btKey.Concat(id.To_4_bytes_array_BigEndian())
gives us the lowest possible compound key for this price; we use it in tran.SelectForwardStartFrom("Prices",btKey,true);
If we take
uint id = UInt32.MaxValue;
btKey = btKey.Concat(id.To_4_bytes_array_BigEndian())
it will give us such a btKey:
AE-0F-42-40-FF-FF-FF-FF
the highest possible compound key for this price.
Of course, when you have got a key from the Prices table (it's a byte[]), you can call row.Key.Substring(4,4).To_UInt32_BigEndian() to receive the uint id for the Articles table and retrieve the value from the Articles table by this key.
[20120521]
Nested tables.
Actually, it's the ability to store, inside any value of a Key/Value table, from 1 to N other tables plus extra data; and inside the values of any nested table again from 1 to N tables plus extra data, and so on, as far as your resources allow. It's a multi-dimensional storage concept.
It also means that in one value we can store an object of any complexity. Every property of such an object which can be represented as a table (List or Dictionary) inherits all the possibilities of the master table. We can again run our favorite operations like SelectForward, SelectBackward, Skip, Remove, Add etc. on nested tables and on sub-sub-....-sub-nested tables.
Table "t1"
Key | Value
1 | /...64 byte..../ /...64 byte..../ /...64 byte..../
Key<int>Value Key<string> Value
1 /...64 byte..../ a5 /...64 byte..../ /...64 byte..../
2 /...64 byte....//...64 byte..../ b6 string
3 t7 int
h8 long
2 | /...64 byte..../
3 | /...64 byte....//...64 byte..../ extra data /...64 byte..../ extra data /...64 byte..../
Note, it's not possible to copy a table which has nested tables in its values with the techniques described before (simple byte copying). But it is possible to automate this process, because the table root carries the mark “dbreeze.tiesky.com”, always starting at the same offset from the table root start, and the root length is fixed at 64 bytes; so one day we will make this recursive copy function.
Note, we are still thinking about the names of the methods used for fetching nested tables, and we know that time will place the correct emphasis here as well.
Every operation starts from the master table. The master table is a table which is stored in the Scheme, and you know its name perfectly well.
tran.Insert<int,string>("t1",1,"Hello");
tran.Insert<int,string>("t1/Points",1,"HelloAgain");
So, let's assume we have a master table with the name “t1”. The keys of this table are of integer type. The values can be different.
If you know what is stored under the different keys, you can always fetch the values correctly; on the lowest level they are always byte[] - a byte array.
To insert a nested table we have designed a new method:
tran.InsertTable<int>("t1", 4, 0);
You need to supply one generic type for key resolving; the value will be automatically resolved as a byte array. As parameters you supply the master table name, the key (4 in our example) and the table index.
As you remember, we can put more than 1 table in a value, and each of them occupies 64 bytes.
So, if index = 0 the table occupies value bytes 0-63; if index = 1 the table occupies value bytes 64-127, etc.
In between you can put your own values; just remember not to overlap the nested tables' roots.
tran.InsertTable<int>("t1", 4, 0);
tran.InsertPart<int, int>("t1", 4, 587, 64);
Key 4 will have 64 bytes of a table root and then 4 bytes reserved for the value 587. You can work with them separately.
Note, the method InsertTable carries the extra meaning that we want to insert/change/modify. If the table didn't exist in that place, it will be created automatically. InsertTable also notifies the system that the thread using it intends to modify table “t1”; that's why all the necessary techniques, like tran.SynchronizeTables when you modify more than one master table, must be used. They are described in the previous chapters.
tran.SelectTable<int>("t1", 4, 0);
Note, the method SelectTable will not create the table if it doesn't exist, and this method is recommended for READING THREADS. It can also be used by WRITING threads just to get the table without creating it.
NestedTable repeats the functionality of the Transaction class in the scope of table operations. You will find there all the well-known methods: Select, SelectForward, SelectBackward, Insert, InsertPart, RemoveKey, RemoveAll etc.
The first difference is that you don't need to supply the table name as a parameter.
Master table "t1":
Key | Value
1 |
2 |
3 |
4 | /*....64 byte table root...*/ /*4 bytes integer*/

Nested table stored in key 4 (index 0):
Key | Value
1 | Hi1
2 | Hi2
3 | Hi3
tran
.InsertTable<int>("t1", 4, 0)
.Insert<int, string>(1, "Hi1")
.Insert<int, string>(2, "Hi2")
.Insert<int, string>(3, "Hi3");
tran.Commit();
This “functional programming” technique is possible because Insert returns the underlying NestedTable.
tran
.SelectTable<int>("t1", 4, 0)
.Select<int, string>(1)
.PrintOut();
Let's iterate:
foreach (var row in tran
.SelectTable<int>("t1", 4, 0)
.SelectForward<int, string>()
)
{
row.PrintOut();
}
Note, if you try to Insert into a nested table obtained via the master SelectTable, you will receive an exception. Inserting (removing, changing - all modifications) into any generation of nested tables is allowed only when starting from the master InsertTable method.
Master table "t1":
Key | Value
1 |
2 |
3 |
4 | /*....64 byte table root...*/

Nested table in key 4 (index 0):
Key | Value
1 | Hi1
2 | /*....64 byte table root...*/ /*....64 byte table root...*/
3 | Hi3

Second-generation nested tables inside key 2:
index 0:            index 1:
Key | Value         Key | Value
1 | Xi1             7 | Piar7
2 | Xi2             8 | Piar8
var horizontal =
tran
.InsertTable<int>("t1", 4, 0);
horizontal
.GetTable<int>(2, 0) //we use it to access the next table generation
.Insert(1, "Xi1")
.Insert(2, "Xi2");
horizontal
.GetTable<int>(2, 1)
.Insert(7, "Piar7")
.Insert(8, "Piar8");
//Fetching value
tran.SelectTable<int>("t1", 4, 0)
.GetTable<int>(2, 1)
.Select<int, string>(7)
.PrintOut();
Note, there is no separate Commit or Rollback for nested tables; they are committed or rolled back via the master table's Commit or Rollback.
[20120525]
We know this Row from previous examples, but now it's enhanced with the new method GetTable(uint tableIndex), with which you can get a nested table stored inside this row by its tableIndex. It works for master and for nested tables.
tran.Commit();
//Result will be
1; “Test1”
2; “Test2”
3; “Test3”
We have created extra insert and select statements for the master table and nested tables to support direct casts of DBreeze tables to a C# Dictionary and HashSet (a list of unique keys).
tran.Commit();
_b = tran.SelectTable<int>("t1",15,0)
.SelectDictionary<int, uint, string>(10, 0);
“t1”
Key<int> | Value<byte[]>
1 |
2 |
..
10 | /*bytes 0-63: nested table root*/
    Key<uint> | Value<string>
    10 | “Hello, my friends”
    11 | “Sehr gut!”
tran
.InsertTable<int>("t1",15,0)
.InsertDictionary<int, uint, string>(10, _d, 0,true);
“t1”
Key<int> | Value<byte[]>
1 |
2 |
..
15 | /*bytes 0-63: nested table root*/
    Key<int> | Value<byte[]>
    ... | ...
    10 | /*bytes 0-63: nested table root*/
        Key<uint> | Value<string>
        10 | “Hello, my friends”
        11 | “Sehr gut!”
Select will be used to get these values; HashSet has the same semantics.
Note, there is one important flag in InsertDictionary and InsertHashSet: the last parameter, bool withValuesRemove.
If you supplied a Dictionary with keys 1,2,3 before and committed, and the next time you supply a Dictionary with keys 2,3,4, then:
if withValuesRemove = true, keys 2,3,4 will stay in the db;
if withValuesRemove = false, keys 1,2,3,4 will stay in the db.
This gives us:
- a quick method to store a set of keys/values into nested tables from a Dictionary or HashSet (InsertDictionary(....,false));
- helper functions for small Dictionaries/HashSets to be stored and selected with automatic removal and update (InsertDictionary(....,true));
- the ability to get a full table of any Key/Value type as a Dictionary or HashSet, right in memory.
[20120526]
We have also added Insert/Select Dictionary/HashSet for the master tables themselves (not only on the nested levels):
inserting a Dictionary into table t1 itself, and into row 1 of t1 a nested Dictionary located from byte 0 of the row.
Corresponding selects:
tran.SelectDictionary<int, int>("t1");
tran.SelectTable<int>("t1", 1, 0).SelectDictionary<uint, uint>();
[20120529]
We can face memory growth if we use lots of nested tables inside one transaction. Keeping a table open takes a certain amount of memory.
The master table and the tables nested into it share the same physical file. The current engine automatically disposes of the master table and all nested tables when the transaction working with the master table is finished, but only if parallel threads are not reading from the same table at the same time. The master table and its nested tables are disposed of together with the last transaction working with this table. If we write into a table once per 7 seconds and read it once per 2 seconds, the table will definitely be able to free the memory it occupies in between.
Some more situations. For example, we insert data in such a manner that every master key receives its own nested table, committing only at the end (see the sketch below).
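A sketch of such an insert pattern (illustrative types and quantities; one nested table is opened under each master key):

using (var tran = engine.GetTransaction())
{
    for (int i = 0; i < 100000; i++)
    {
        //every master key receives its own nested table
        tran.InsertTable<int>("t1", i, 0).Insert<int, string>(1, "Hi");
    }
    tran.Commit();
}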
This is a really bad case for memory. Here we have to open 100000+1 (master) tables and hold them in memory till tran.Commit().
In our tests the used memory grew from 30MB (the basic run of a test program) up to 350MB. After the transaction finished, the process size didn't change, but those 320MB were marked for collection by the .NET Garbage Collector, so calling GC.Collect (or simply using the process further) brings it back to 30MB.
For now it's hard to find a way to avoid this memory growth. It's not so critical when you insert in small chunks (100 records), but you must keep it in mind.
Another case: after every loop iteration we don't need the table we just used any more, but it still stays in memory and makes it grow. In this example memory grew from 30MB up to 135MB; if you select more records, more memory will be needed.
Exactly for such cases we have integrated the table Close method.
To use Close, we need a variable for accessing the table. Our code will look like this now:

foreach (var row in tran.SelectForward<int, byte[]>("t1"))
{
    var tbl = tran.SelectTable<int>("t1", row.Key, 0); //types assumed for this sketch
    if (!tbl.Select<uint, uint>(1).Exists)
    {
        Console.WriteLine("not");
    }
    tbl.CloseTable();
}
Note, when we call the NestedTable CloseTable method, we close the current table and all tables nested in it. Every master-table InsertTable or SelectTable (and nestedTable.GetTable) increases an “open quantity” counter by 1; every CloseTable decreases it by 1; when the counter drops below 1, the table, with all tables nested in it, is closed.
If we forget to close the table, it will stay open till all operations with the master table are finished and the automatic dispose runs.
If we need to support indices other than our table key, where we store our objects, we need to create other tables whose keys are the secondary index, etc. In the secondary index table we can store a direct pointer to the first table instead of its key.
When we insert or change a key, we have the ability to obtain its file pointer; then we can get the value by this pointer, saving the time of searching the first table's key.
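A sketch of the pointer round-trip (the out-parameter overload of Insert is listed in full later in this document; the table name and types here are illustrative):

byte[] ptr = null;
bool wasUpdated = false;
tran.Insert<uint, string>("Objects", 1, "payload", out ptr, out wasUpdated); //ptr now physically references the key/value
//...
var row = tran.SelectDirect<uint, string>("Objects", ptr); //jumps straight to the stored key/value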
Note, when we update the primary table, which holds the full information about the object, its pointer can move; that's why our DAL must also update the value (the pointer to the primary table's key) in the secondary table. When we delete from the primary table, we must delete from the secondary index table in the same transaction as well.
Note, for nested tables SelectDirect must be used exactly from the table where you are searching for information, to avoid collisions.
Note, we can get the pointer to the value from Insert, InsertPart and ChangeKey for primary and nested tables.
[20120601]
Inside a table we have keys and values. If we think about the value as a row with columns, we get the ability to store independent data types in one row and access them using Row.GetValuePart(uint startIndex, uint length). Everything is fine while our data types have a fixed length, but sometimes we need to store dynamic-length data structures inside the columns.
For this we have developed the following method inside the transaction class:
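The shape of the call (a sketch based on the notes below: the first parameter is the table name, the second an existing data-block pointer or null, the third the content):

byte[] dataBlockPointer = tran.InsertDataBlock("t1", null, new byte[] { 1, 2, 3 });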
Data blocks live in parallel with the table itself and inherit the same data-visibility behaviour for different threads as the other structures.
Nested tables also have the InsertDataBlock method.
Note, InsertDataBlock always returns a byte[] of the same length, 16 bytes: it's a reference to the stored value. Because the returned length is fixed, we can use it as a column inside a Row.
Note, if the 2nd parameter, initialPointer, is NULL, a new data block will be created for the table; if it is not NULL, it means such a data block already exists and DBreeze will try to overwrite it.
Note, data-blocks obey transaction rules, so till you commit an “updated” data-block, parallel reading threads will continue to see its last committed value. We can also roll back the changes.
After we insert a data-block, we want to store its pointer inside a row, to be able to get it back later. Having received the data-block pointer, we store it inside the value of “t1” key 17, starting from index 10; the pointer always has a fixed length of 16 bytes, so starting from index 26 we can continue storing other values.
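A minimal sketch of storing the pointer as a column (key 17 and index 10 follow the description above):

byte[] dataBlockPointer = tran.InsertDataBlock("t1", null, new byte[] { 1, 2, 3 });
tran.InsertPart<int, byte[]>("t1", 17, dataBlockPointer, 10); //the pointer occupies bytes 10..25
tran.Commit();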
Updated:
we can now also get DataBlocks directly from the transaction:
//When datablock is saved in master table
tran.SelectDataBlock("t1",dataBlockPointer);
//When datablock is saved in nested table
tran.SelectTable<int>("t1",1,0).SelectDataBlock(dataBlockPointer)
If we want to store a link to a data-block inside a nested table's row, we must do it via the NestedTable method:
tran.Commit();
tbl.CloseTable();
if (fr == null)
Console.WriteLine("T1 NULL");
else
Console.WriteLine("T1 " + fr.ToBytesString());
The system understands empty pointers to data-blocks. In the following example we try to get a non-existing data-block, then update it and write the pointer back:
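A sketch of this round-trip (column offsets follow the earlier example; passing an empty/absent pointer is tolerated, as described above):

byte[] ptr = null;
var row = tran.Select<int, byte[]>("t1", 17);
if (row.Exists)
    ptr = row.GetValuePart(10, 16); //previously stored pointer, may be absent
byte[] fr = tran.SelectDataBlock("t1", ptr);                   //an empty pointer yields no data
ptr = tran.InsertDataBlock("t1", ptr, new byte[] { 9, 9, 9 }); //creates or overwrites the block
tran.InsertPart<int, byte[]>("t1", 17, ptr, 10);               //write the (possibly new) pointer back
tran.Commit();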
Hash functions of common usage. Fast access to long strings and byte arrays.
Liana-Trie.
The DBreeze search-trie is a variation of a radix trie, optimized by all parameters - ©.
So, if we have keys of type int (4 bytes), we need from 1 up to 4 HDD hits to get a random key (we don't talk about possible HDD problems and OS file-system fragmentation here). If we have keys of type long (8 bytes), we need from 1 up to 8 hits, depending upon the quantity and character of the keys. If we store longer byte arrays, we need from 1 up to max-length-of-the-biggest-key hits. Say we store 4 such string keys in one table:
key1: http://google.com/hi
key2: http://google.com/bye
key3: http://dbreeze.tiesky.com
key4: abrakadabra
(after you find a key in range selects, searching for the others inside the iteration works fast)
So, if we need to use StartsWith, or we need sorting of such a table, we have to store the keys as they are.
But if we only need random access to such keys, the best approach is to store not the full keys but only their 4-, 8- or 16-byte HASH-CODES. Also, hashed keys and values with direct physical pointers can represent a secondary index. For example, in the first table we store the keys as they are, together with the content, and in the second table we store hashes of those keys and physical pointers to the first table. Now we can get a sorted view and have the fastest random access (from 1 up to 8 hits, if the hash is 8 bytes).
Hashes can have collisions. We have integrated the MurMurHash3 algorithm (which returns a 4-byte hash) into the DBreeze sources and added two more functions to get 8-byte and 16-byte hash codes. We recommend using the 8-byte or 16-byte functions to stay collision-safe with very high probability. If you need a 1000% guarantee, keep a nested table under every hash and store the real key in it (or the keys, in case of collisions) for checking, or use some other technique, like a serialized list of the keys sharing the same hash code.
[20120628]
Row has the property LinkToValue (actually it's a link to the Key/Value) for getting a direct link to the row and using it together with SelectDirect. All links (pointers to key/value pairs) now have a fixed size of 8 bytes and can be stored as virtual columns in rows.
DBreezeConfiguration conf = new DBreezeConfiguration()
{
    DBreezeDataFolderName = @"D:\temp\DBreezeTest\DBR1",
};
engine = new DBreezeEngine(conf);
If you have an existing database, you can make a full copy of it (a “snapshot”) and then continue working with the incremental backup option switched on. The backup creates a new file once per “IncrementalBackupFileIntervalMin” (old files are released and can be copied away and deleted). The current backup file is always locked by DBreeze. You have to specify the folder for DBreeze incremental backup files in “BackupFolderName”. That's all.
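A minimal configuration sketch (the name of the "Backup" holder object is an assumption; BackupFolderName and IncrementalBackupFileIntervalMin are the settings mentioned above):

DBreezeConfiguration conf = new DBreezeConfiguration()
{
    DBreezeDataFolderName = @"D:\temp\DBreezeTest\DBR1",
    Backup = new Backup()
    {
        BackupFolderName = @"D:\temp\DBreezeTest\Backup",
        IncrementalBackupFileIntervalMin = 30
    }
};
engine = new DBreezeEngine(conf);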
If you start a new database with the incremental backup option, you will later be able to recreate the whole db from the backup files; if you started from a “snapshot”, the backup files can bring your “snapshot” to the current db state.
You can restore a backup into the folder where your snapshot resides or, if incremental backup was switched on from the beginning, into an empty folder.
The switched-on incremental backup option decreases Write speed; Read speed is untouched.
Inserting one million integers takes 9 sec without the backup option and 17 sec with it.
[20120922]
After attaching the new DBreeze and recompiling the project you will see errors, because such functions don't exist in DBreeze any more.
Why?
It's an issue, a historical issue. Our DBreeze generic type converter (used in tran.Insert<DateTime,DateTime .., tran.InsertPart<DateTime etc.) was written before some ByteProcessingUtils functions, and somehow DateTime was converted first to ulong and then to byte[], whereas To_DateTime_BigEndian() and To_8_bytes_array_BigEndian() from DBreeze.Utils used long - such an unpleasant thing.
But if you have already used manual DateTime conversions, we have left two functions for compatibility:
DateTime To_DateTime_zCompatibility(this byte[] value) (this you can use instead of the old To_DateTime_BigEndian)
Over the last few months we have created many tables with different value configurations, combining ways of storing the data. One of the most popular ways is handling the value byte[] as a set of columns of fixed length. We found out that we lacked nullable data types, and for this we have added to DBreeze.Utils.ByteProcessing a range of extensions for all standard nullable data types.
You take any standard nullable data type - int?, bool?, DateTime?, decimal?, float?, uint? etc. - and convert it into byte[] using the DBreeze.Utils extensions.
Note that practically all nullable converters create a byte[] one byte longer than the non-nullable ones.
Sometimes in one value we hold some columns of fixed length, then some DataBlocks which represent pictures, and then DataBlocks which represent big text or JSON-serialized object parts. But we found out that we missed storing text the way standard RDBMS do: nvarchar(50) NULL or varchar(75). Sure, we could use DataBlocks for that, but sometimes we don't want to, especially since a DataBlock reference occupies 16 bytes.
For this we have added two more conversion functions. They both emulate the behaviour of RDBMS text fields of a fixed reservation length: maximum 32KB, minimum 1 byte for ASCII text and 4 bytes for UTF-8 text.
Converting a string with a reservation of 50 characters returns a byte array of 50+2 = 52 bytes, which you can store in your value from a specific index (let's say 10).
Note, the returned size is always 2 bytes longer: we need these bytes to store the length of the real text inside the fixed-size array and the NULL flag.
Sometimes it's very useful to set up a row version in the first byte of the value; then, depending upon this version, the further content of the value can have different configurations.
[20121012]
Let's assume that before every following example we delete table “t1” and then execute an insert which fills it with a range of ascending integer keys, followed by tran.Commit().
Sometimes it's interesting for us to make table modifications while iterating, like here:

using (var tran = engine.GetTransaction())
{
    tran.SynchronizeTables("t1");
    var en = tran.SelectForward<int, int>("t1").GetEnumerator();
    while (en.MoveNext())
    {
        tran.RemoveKey<int>("t1", en.Current.Key);
    }
    tran.Commit();
}
The enumerator en refers to the writing root at this moment, because our table was added to the modification list (by SynchronizeTables or any other modification command like insert, remove etc.), so changes to the table, even before committing, can be reflected inside the enumerator.
But here we delete the same key which we have just read; that's why this task will be accomplished correctly. We don't insert or delete “elements of the future iterations”.
tran.SynchronizeTables("t1");
int pq = 799999;
while (en.MoveNext())
{
tran.RemoveKey<int>("t1", pq);
pq--;
}
tran.Commit();
}
We will not delete all keys in the previous example. The enumerator will stop iterating somewhere in the middle; where exactly depends upon the key structure and is not really useful for us.
So, if you are going to iterate over something and change possible “elements of the future iterations”, there is no guarantee of correct logic execution. This concerns synchronized iterators.
To make it correct, we have added, for every range select function, an overload with the parameter bool AsReadVisibilityScope. This also concerns the range select functions of nested tables.
tran.SynchronizeTables("t1");
int pq = 799999;
while (en.MoveNext())
{
tran.RemoveKey<int>("t1", pq);
pq--;
}
tran.Commit();
}
All keys will be deleted correctly, because our enumerator's visibility scope is now the same as in a parallel thread: it sees only the committed data projection from before the start of the current transaction.
Now we can choose which visibility scope we want for an enumerator whose table is in the modification list: synchronized or parallel. Default range selects, without the extra parameter, return the synchronized view if the table is in the modification list.
[20121015]
public class Article
{
    [PrimaryKey]            //attribute and Id field reconstructed from the explanation below
    public long Id = 0;

    [SecondaryKey]
    public float Price = 15f;
}
Primary and Secondary key attributes don't exist in DBreeze for now. But the idea is the following: from the field “Id” we want to make the primary index/key, and from the field “Price” we want to create one of our secondary indexes.
For now DBreeze doesn't have an extra object layer, so we would make such a save in the following format:
using DBreeze;
using DBreeze.Utils;

byte[] ptr = null;
bool wasUpdated = false;
//inserting the object itself into the primary table and obtaining the physical
//pointer to it (a is the Article instance; its serialized form aSerialized is assumed here)
tran.Insert<long, byte[]>("Article", a.Id, aSerialized, out ptr, out wasUpdated);

tran.Insert<byte[],byte[]>
    ("ArticleIndexPrice",
     a.Price.To_4_bytes_array_BigEndian()   //compound key: price+Id
        .Concat(a.Id.To_8_bytes_array_BigEndian()),
     ptr //value is a pointer to the primary table
    );
tran.Commit();
}
}
Something like this. In real life, all primary and secondary indexes could be packed into nested tables of one MasterTable under different keys.
We have filled 2 tables. The first is “Article”: as the key we store Article.Id; as the value we store the article name and price. The second table is “ArticleIndexPrice”: its key is constructed from (float)Price+(long)ArticleId - it's unique, sortable, comparable and searchable. This technique was described in the previous chapters. As the value we store the physical pointer to the primary key inside the “Article” table. When we have such a physical pointer, finding the Key/Value in the primary table “Article” costs only one HDD hit.
But keys and values are not always static. Sometimes we remove articles, sometimes we change the price or even expand the value (in the last case we need to save the new physical pointer into the secondary index table).
If we remove an Article, we must also remove the compound key from the “ArticleIndexPrice” table. When we update a price inside the Article table, we must delete the old compound key from “ArticleIndexPrice” and create a new one.
It means that every insert into the Article table can be counted as a probable update, and we must check whether a row with such an Id exists before the insert. If yes, we must read it, delete the compound key, construct and insert a new compound key into the “ArticleIndexPrice” table, and finally update the value in the “Article” table.
That's why we have added useful overloads for every modification command inside the transaction class and the nested table class:
public void Insert<TKey, TValue>(string tableName, TKey key, TValue value, out byte[] refToInsertedValue, out bool WasUpdated)
public void InsertPart<TKey, TValue>(string tableName, TKey key, TValue value, uint startIndex, out byte[] refToInsertedValue, out bool WasUpdated)
public void ChangeKey<TKey>(string tableName, TKey oldKey, TKey newKey, out byte[] ptrToNewKey, out bool WasChanged)
Actually DBreeze, when inserting data, knows whether it's going to be an update or a new insert; that's why DBreeze can notify us about it.
We keep inserting data in the usual manner. If the flag WasUpdated equals true, we know it was an update. We can then use our Select overloaded with the visibility-scope parameter to get the key/value pair as it was before the modification, and change the secondary index table. We need to do this only in the case of an update/remove/change command, not in the case of a new insert.
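A sketch of that branch (table names and types are illustrative):

byte[] ptr = null;
bool wasUpdated = false;
tran.Insert<long, byte[]>("Article", id, value, out ptr, out wasUpdated);
if (wasUpdated)
{
    //read the pre-modification row via the visibility-scope overload of Select,
    //remove the old compound key from "ArticleIndexPrice" and insert the new one
}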
[20121016]
If we store DataBlocks inside a value (not just a serialized value or columns of fixed length), then before updating such a value we must in any case read the previous value content (to get the initial DataBlock pointers for the updates). So, again, every insert can be counted as a probable update. The following technique/benchmark shows the time consumed by reading the previous row-value version before an insert:
The plain-insert variant took 9300 ms (9 sec in 2012; 1.5 sec in 2015) for 1 MLN inserts. The variant distinguishing between updates and inserts, sketched below, took 10600 ms (10 sec) for 1 MLN inserts.
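A sketch of the measured loop (assuming the out-parameter overload shown above):

DBreeze.Diagnostic.SpeedStatistic.StartCounter("a");
using (var tran = engine.GetTransaction())
{
    byte[] ptr = null;
    bool wasUpdated = false;
    for (int i = 0; i < 1000000; i++)
        tran.Insert<int, int>("t1", i, i, out ptr, out wasUpdated);
    tran.Commit();
}
DBreeze.Diagnostic.SpeedStatistic.StopCounter("a");
DBreeze.Diagnostic.SpeedStatistic.PrintOut(true);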
Remember that the DBreeze insert and select algorithms work with maximum efficiency in bulk operations when the keys are supplied sorted in ascending order (descending is a bit slower). So, sort bulk chunks in memory before inserts/selects.
The previous 2 examples were about pure inserts; we then ran them again with the data already in the table, so all records had to be updated:
[20121023]
DBreeze can also reside fully in-memory (it's just a feature), with the same functionality as the disk-based version.
Instantiating example:
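A minimal sketch (assuming the Storage member of DBreezeConfiguration selects the in-memory backend):

engine = new DBreezeEngine(new DBreezeConfiguration()
{
    Storage = DBreezeConfiguration.eStorage.MEMORY
});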
Console.WriteLine(tran.Count("t1"));
tran.Commit();
It works a bit slower than a .NET Dictionary or SortedDictionary, because it has lots of subsystems inside which must be supported, and it is designed to work with very large data sets without index fragmentation after continuous inserts, updates and deletes.
We have increased the standard bulk insert speed of DBreeze (about 5 times) by adding a special memory cache layer before flushing data to the disk. In the standard configuration, 20 tables written in parallel receive such a memory buffer, 1MB each, before the disk flush. The 21st (and further parallel tables) will write without a buffer. After the writing transactions are disposed of, other tables can receive such a buffer, so it's not bound to table names; the tables are chosen automatically right at insert time.
Now DBreeze, in the standard configuration, can store in bulk (ascending ordered) 500K records per second (on the benchmark PC). 6 parallel threads could write 1 MLN records each into 6 different tables in 3.4 seconds, which was about 40MB/s and 1.7 MLN simple records per second (see the Benchmarking document).
[20121101]
"check”
"sam"
"slash”
"slam"
"what"
string prefix = "slap";
Result:
slam
slash
and for
foreach (var row in tran.SelectBackwardStartsWithClosestToPrefix<string, byte>("t1", prefix))
Result:
slash
slam
[20121111]
Starting from the current DBreeze version we are able to set up table locations globally, by table-name patterns. We can mix tables' physical locations inside one DBreeze instance: tables can reside in different folders, on different hard drives and even in memory.
//SETTING UP THAT ALL TABLES STARTING WITH “mem_” must reside in-memory
conf.AlternativeTablesLocations.Add("mem_*", String.Empty);
So, if the value of an AlternativeTablesLocations dictionary entry is empty, the table will be automatically forced to work in-memory. If no pattern is found for a table, it will be created according to the main DBreeze configuration settings (DBreezeDataFolderName and StorageType).
If one table matches several patterns, the first one is taken.
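Tables can likewise be redirected to another physical folder (a hypothetical path):

conf.AlternativeTablesLocations.Add("fast_*", @"E:\DBreezeFast");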
* - 1 or more symbols of any kind (every symbol after * is cut): Items* matches Items123 and Items123/Pictures etc.
# - symbols (except slash) followed by a slash and at least one more symbol: Items#/Picture matches Items123/Picture
$ - 1 or more symbols except slash (every symbol after $ is cut): Items$ matches Items123; Items$ does not match Items123/Pictures
The incremental backup restorer works on the file level and knows nothing about the user's logical table names. It will restore all tables into one specified folder. Later, after starting DBreeze and reading the scheme, it's possible to manually move the disk table files into their corresponding physical places according to the storage logic.
[20130529]
Speeding up batch modifications (updates, random inserts)
To economize disk space, DBreeze tries to reuse the same HDD space, if possible, for different types of updates.
There are 3 places where updates are possible:
- Update of search trie nodes (LianaTrie nodes)
- Update of Key/Values
- Update of DataBlocks
To be sure that the overwritten data file will not be corrupted in case of power loss, we first have to write data into the rollback file, then into the data file. DBreeze in standard mode excludes any intermediate OS cache (only the internal DBreeze cache is used) and writes to the “bare metal”.
Today's HDDs and even SSDs are quite slow for random writes; that's why we use a technique of turning random writes into sequential writes.
When we use DBreeze for standard accumulation of random data from different sources inside of small transactions, the speed degradation is not very visible. But we can see it very well when we need to update a batch of specific data.
We DON'T SEE SPEED DEGRADATION when we insert a batch of growing keys - any newly inserted key is always bigger than the maximal existing key (SelectForward will return the newly inserted key as the last one). For such a case we should do nothing.
We CAN SEE SPEED DEGRADATION when we update a batch of values or data-blocks, or when we insert a batch of keys in random order - especially if these keys have high entropy.
For such cases we have integrated new methods for transactions and for nested tables:
tran.Technical_SetTable_OverwriteIsNotAllowed("t1");
or the equivalent method of a nested table.
- This technique is interesting for the transactions with specific batch modifications,
where speed really matters. Only developer can answer this question and find a
balance.
- This technique is not interesting for the memory based data stores.
- These methods work only inside of one transaction and must be called for every
table or nested table separately, before table modification command.
- When new transaction starts, overwrite automatically will be allowed again for all
tables and nested tables.
- Overwriting concerns all: search trie nodes, values and data blocks.
- Remember always to sort batch ascending by key, before insert - it will economize
HDD space.
Of course this technique makes the data file bigger, but it returns the desired speed. All data which could have been overwritten in place will instead be written to the end of the file.
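A usage sketch (the loop body is illustrative):
using (var tran = engine.GetTransaction())
{
    tran.SynchronizeTables("t1");
    //Must be called before any modification of "t1" within this transaction
    tran.Technical_SetTable_OverwriteIsNotAllowed("t1");
    for (int i = 0; i < 1000000; i++)
        tran.Insert<int, byte[]>("t1", i, new byte[] { 1, 2, 3 });
    tran.Commit();
}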
Note
DBreeze version for .NET35 can be used only under Windows, because it utilizes the system API FlushFileBuffers from kernel32.dll.
DBreeze version for .NET40 doesn't use any system API functions and can be used under Linux MONO and under .NET 4 and higher. For Windows, be sure to have the latest .NET Framework (starting from 4.5), where Microsoft has fixed a bug with FileStream.Flush(true).
[20130608]
Starting from DBreeze version 01.052 we can restore a table from another source table on the fly.
//Copying from main engine (Table t1) to engine2 (table “t1”), with changing all values to 2
tran2.Commit();
}
engine2.Dispose();
//engine2 is fully closed.
//moving table from engine2 (physical name) to main engine (logical name)
tran.RestoreTableFromTheOtherFile("t1", @"D:\temp\DBreezeTest\DBR2\10000000");
//Point555
}
//Checking
Up to Point555 everything was OK: while copying data from one engine into another, parallel threads could read data from table “t1” of the main engine; parallel writing threads, of course, were blocked by the tran.SynchronizeTables("t1") command.
Starting from Point555, some parallel threads which were reading table “t1” could still hold an in-memory reference to the old physical file; reading values via such references can lead to the DBreeze TABLE_WAS_CHANGED_LINKS_ARE_NOT_ACTUAL exception.
[20130613]
Parallel threads can open transactions and read the same tables in parallel, in our standard configuration. For writing threads we use the tran.SynchronizeTables command to serialize the writing threads' access to the tables.
But what if we want to block access to the tables even for parallel reading threads, while modification commands of our current transaction are not yet finished?
Inside of such a transaction we want to define the lock type for the listed tables.
Note: we must use either the first transaction type (engine.GetTransaction()) or the new type (with SHARED/EXCLUSIVE) for the same tables throughout the whole program.
Example of usage:
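A sketch of such a function, assuming the eTransactionTablesLockTypes enum from the DBreeze.Transactions namespace (the method body is illustrative):
using DBreeze.Transactions;
void ExecF_003_1()
{
    //EXCLUSIVE lock on "t1": even parallel reading threads wait until this transaction is disposed
    using (var tran = engine.GetTransaction(eTransactionTablesLockTypes.EXCLUSIVE, "t1"))
    {
        tran.Insert<int, int>("t1", 1, 1);
        tran.Commit();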
Thread.Sleep(2000);
}
}
//This must be used in any case, when Shared threads can have parallel writes
tran.SynchronizeTables("t1");
using DBreeze.Utils.Async;
Action t2 = () =>
{
ExecF_003_2();
};
t2.DoAsync();
Action t1 = () =>
{
ExecF_003_1();
};
t1.DoAsync();
Action t3 = () =>
{
ExecF_003_3();
};
t3.DoAsync();
}
This approach is good for avoiding transaction exceptions in case of data compaction or removing keys with file re-creation, as described in the previous chapter.
[20130811]
Remove a KeyValue and get the deleted value plus a notification whether the value existed, in one round.
For this we have added an overload in Master and in Nested tables: RemoveKey<TKey>(string tableName, TKey key, out bool WasRemoved, out byte[] deletedValue)
[20130812]
An Insert overload for Master and Nested tables, letting us not overwrite the key if it already exists.
public void Insert<TKey, TValue>(string tableName, TKey key, TValue value, out byte[] refToInsertedValue, out bool WasUpdated, bool dontUpdateIfExists)
WasUpdated becomes true if the value already existed, and false if it was not in the DB.
dontUpdateIfExists set to true prevents the DB from making an update.
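A usage sketch of both overloads (following the signatures given above):
using (var tran = engine.GetTransaction())
{
    bool wasRemoved;
    byte[] deletedValue;
    tran.RemoveKey<int>("t1", 1, out wasRemoved, out deletedValue);

    byte[] refToInsertedValue;
    bool wasUpdated;
    //dontUpdateIfExists = true: an existing value under key 2 stays untouched
    tran.Insert<int, string>("t1", 2, "value", out refToInsertedValue, out wasUpdated, true);
    tran.Commit();
}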
DBreeze uses a lazy value loading technique. For example, we can say
var row = transaction.Select<int, int>("t1", 1);
At this moment we receive a row. We know that such a row exists by the row.Exists property and we know its key by the row.Key property. At this moment the value is still not loaded into memory from disk. It will be read out from the DB only when we access row.Value.
Sometimes the key alone is enough. Such cases can happen when we store a secondary index with the link to the primary table as a part of the key, or if we have “multiple columns” in one row and need only one column, not the complete, probably huge, value.
Nevertheless, lazy load works a bit slower compared with getting key and value in one round, due to extra HDD hits.
[20140603]
Starting from now we can bind any byte[] serializer/deserializer to DBreeze in following
manner:
This declaration must be done right after DBreeze instantiation, before its real usage.
DBreeze.Utils.CustomSerializator.ByteArraySerializator = SerializeProtobuf;
DBreeze.Utils.CustomSerializator.ByteArrayDeSerializator = DeserializeProtobuf;
where...
We mostly use the Protobuf.NET serializer in our projects, so the example will also be done with Protobuf. Get it via NuGet or make a reference to it (protobuf-net.dll).
Here are the custom wrapping functions for Protobuf:
public static T DeserializeProtobuf<T>(this byte[] data)
{
    T ret = default(T);
    using (System.IO.MemoryStream ms = new System.IO.MemoryStream(data))
    {
        ret = ProtoBuf.Serializer.Deserialize<T>(ms);
        ms.Close();
    }
    return ret;
}
public static byte[] SerializeProtobuf(this object data)
{
byte[] bt = null;
using (System.IO.MemoryStream ms = new System.IO.MemoryStream())
{
ProtoBuf.Serializer.NonGeneric.Serialize(ms, data);
bt = ms.ToArray();
ms.Close();
}
return bt;
}
Now let’s prepare an object for storing in DBreeze, decorated with Protobuf attributes (extra
documentation about protobuf can be found on its website):
[ProtoBuf.ProtoContract]
public class XYZ
{
    public XYZ()
    {
        P1 = 12;
        P2 = "sdfs";
    }
    [ProtoBuf.ProtoMember(1, IsRequired = true)]
    public int P1 { get; set; }
    [ProtoBuf.ProtoMember(2, IsRequired = true)]
    public string P2 { get; set; }
}
//!!! NOTE: it is better to assign row.Value to an "obj" variable once and then use "obj" in the program.
//Every call of row.Value re-reads the data from the table in case of the default ValueLazyLoadingIsOn.
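A usage sketch with the bound serializer (table name is illustrative):
using (var tran = engine.GetTransaction())
{
    tran.Insert<int, XYZ>("t1", 1, new XYZ());
    tran.Commit();
}
using (var tran = engine.GetTransaction())
{
    var row = tran.Select<int, XYZ>("t1", 1);
    if (row.Exists)
    {
        var obj = row.Value; //materialized once, then "obj" is used
        Console.WriteLine(obj.P2);
    }
}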
[20160304]
Example of DBreeze initialization for UWP (Universal Windows Platform).
string dbr_path =
System.IO.Path.Combine(Windows.Storage.ApplicationData.Current.LocalFolder.Path, "db");
Task.Run(() =>
{
    //System.Diagnostics.Debug.WriteLine(dbr_path);
    if (engine == null)
        engine = new DBreezeEngine(dbr_path);
});
[20160320]
In this guide we will create customers, prototypes of business orders for these customers
and determine different search functions.
Let's create WinForm application, add NuGet reference to protobuf-net and DBreeze. On the
form create a button and replace code of the form with this one:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using DBreeze;
using DBreeze.Utils;
namespace DBreezeQuickStart
{
    public partial class Form1 : Form
    {
        DBreezeEngine engine = null;

        void InitDb()
        {
            if (engine == null)
            {
                engine = new DBreezeEngine(new DBreezeConfiguration { DBreezeDataFolderName = @"S:\temp\DBreezeTest\DBR1" });
                //engine = new DBreezeEngine(new DBreezeConfiguration { DBreezeDataFolderName = @"C:\temp" });
            }
        }
[ProtoBuf.ProtoContract]
public class Customer
{
    [ProtoBuf.ProtoMember(1, IsRequired = true)]
    public long Id { get; set; }
    [ProtoBuf.ProtoMember(2, IsRequired = true)]
    public string Name { get; set; }
    /// <summary>
    /// Customer datetime creation
    /// </summary>
    [ProtoBuf.ProtoMember(3, IsRequired = true)]
    public DateTime udtCreated { get; set; }
}
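The Order entity is not shown in this fragment; a minimal sketch consistent with the functions below (the property set is an assumption):
[ProtoBuf.ProtoContract]
public class Order
{
    [ProtoBuf.ProtoMember(1, IsRequired = true)]
    public long Id { get; set; }
    [ProtoBuf.ProtoMember(2, IsRequired = true)]
    public long CustomerId { get; set; }
    /// <summary>
    /// Order datetime creation
    /// </summary>
    [ProtoBuf.ProtoMember(3, IsRequired = true)]
    public DateTime udtCreated { get; set; }
}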
/// <summary>
/// -------------------------------------- STARTING TEST HERE -------------------------------------
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void button1_Click(object sender, EventArgs e)
{
//One time db init
this.InitDb();
//Simple test
////Test insert
//using (var tran = engine.GetTransaction())
//{
// tran.Insert<int, int>("t1", 1, 1);
// tran.Insert<int, int>("t1", 1, 2);
// tran.Commit();
//}
////Test select
//using (var tran = engine.GetTransaction())
//{
// var xrow = tran.Select<int, int>("t1",1);
// if (xrow.Exists)
// {
// Console.WriteLine(xrow.Key.ToString() + xrow.Value.ToString());
// }
// //or
// foreach (var row in tran.SelectForward<int, int>("t1"))
// {
// Console.WriteLine(row.Value);
// }
//}
//Inserting CustomerId 1
var customer = new Customer() { Name = "Tino Zanner" };
Test_InsertCustomer(customer);
//Inserting CustomerId 2
customer = new Customer() { Name = "Michael Hinze" };
Test_InsertCustomer(customer);
/*
Result:
Inserted CustomerId: 1, Name: Tino Zanner
Inserted CustomerId: 2, Name: Michael Hinze
All orders
28.08.2015 07:15:57.734 orderId: 1
28.08.2015 07:15:57.740 orderId: 2
28.08.2015 07:15:57.743 orderId: 3
28.08.2015 07:15:57.743 orderId: 4
28.08.2015 07:15:57.743 orderId: 5
28.08.2015 07:15:57.757 orderId: 6
28.08.2015 07:15:57.758 orderId: 7
28.08.2015 07:15:57.758 orderId: 8
28.08.2015 07:15:57.759 orderId: 9
28.08.2015 07:15:57.759 orderId: 10
28.08.2015 07:15:57.759 orderId: 11
28.08.2015 07:15:57.760 orderId: 12
28.08.2015 07:15:57.760 orderId: 13
Orders of customer 1
28.08.2015 07:15:57.734 orderId: 1
28.08.2015 07:15:57.740 orderId: 2
28.08.2015 07:15:57.743 orderId: 3
28.08.2015 07:15:57.743 orderId: 4
28.08.2015 07:15:57.743 orderId: 5
Orders of customer 2
28.08.2015 07:15:57.757 orderId: 6
28.08.2015 07:15:57.758 orderId: 7
28.08.2015 07:15:57.758 orderId: 8
28.08.2015 07:15:57.759 orderId: 9
28.08.2015 07:15:57.759 orderId: 10
28.08.2015 07:15:57.759 orderId: 11
28.08.2015 07:15:57.760 orderId: 12
28.08.2015 07:15:57.760 orderId: 13
*/
return;
}
/// <summary>
///
/// </summary>
/// <param name="cust"></param>
void Test_InsertCustomer(Customer cust)
{
try
{
using (var tran = engine.GetTransaction())
{
//We don't need this line because we write only into one root table.
//Add more table names for safe transaction operations among multiple
//root tables (read docu)
tran.SynchronizeTables("Customers");
//In table Customers under key 1 we will have a nested table with customers
var tbl = tran.InsertTable<int>("Customers", 1, 0);
//Under index 2 we will have a monotonically grown id
//(id generation and the insert of the customer into the nested table are elided in this fragment)
//Committing entry
tran.Commit();
}
}
catch (Exception)
{
throw;
}
}
/// <summary>
///
/// </summary>
/// <param name="order"></param>
void Test_InsertOrder(Order order)
{
try
{
/*
In our case, we will store orders of all customers in one table "Orders".
Of course we could create for every customer his own table, like Order1, Order2...etc
*/
using (var tran = engine.GetTransaction())
{
//We don't need this line because we write only into one root table.
//Add more table names for safe transaction operations among multiple
//root tables (read docu)
tran.SynchronizeTables("Orders");
//Committing entry
tran.Commit();
}
}
catch (Exception)
{
throw;
}
}
/// <summary>
///
/// </summary>
/// <param name="order"></param>
void Test_InsertOrders(IEnumerable<Order> orders)
{
try
{
/*
In our case, we will store orders of all customers in one table "Orders".
Of course we could create for every customer his own table, like Order1, Order2...etc
*/
using (var tran = engine.GetTransaction())
{
//We don't need this line because we write only into one root table.
//Add more table names for safe transaction operations among multiple
//root tables (read docu)
tran.SynchronizeTables("Orders");
throw;
}
}
/// <summary>
///
/// </summary>
/// <param name="from"></param>
/// <param name="to"></param>
void Test_GetOrdersByDateTime(DateTime from, DateTime to)
{
try
{
using (var tran = engine.GetTransaction())
{
var tbl = tran.SelectTable<int>("Orders", 1, 0);
var tblDateIndex = tran.SelectTable<int>("Orders", 3, 0);
byte[] keyFrom = from.To_8_bytes_array().Concat(long.MinValue.To_8_bytes_array_BigEndian());
byte[] keyTo = to.To_8_bytes_array().Concat(long.MaxValue.To_8_bytes_array_BigEndian());
//(the range scan over tblDateIndex is elided in this fragment)
}
}
catch (Exception)
{
throw;
}
}
//Fragment of the by-customer variant (the enclosing function header is not shown in this fragment):
byte[] keyFrom = customerId.To_8_bytes_array_BigEndian().ConcatMany(from.To_8_bytes_array(), long.MinValue.To_8_bytes_array_BigEndian());
byte[] keyTo = customerId.To_8_bytes_array_BigEndian().ConcatMany(to.To_8_bytes_array(), long.MaxValue.To_8_bytes_array_BigEndian());
}
/// <summary>
/// Deserializes protobuf object from byte[]. Generic style.
/// </summary>
public static T DeserializeProtobuf<T>(this byte[] data)
{
    T ret = default(T);
    using (System.IO.MemoryStream ms = new System.IO.MemoryStream(data))
    {
        ret = ProtoBuf.Serializer.Deserialize<T>(ms);
        ms.Close();
    }
    return ret;
}
/// <summary>
/// Deserializes protobuf object from byte[]. Non-generic style.
/// </summary>
/// <param name="data"></param>
/// <param name="T"></param>
/// <returns></returns>
public static object DeserializeProtobuf(byte[] data, Type T)
{
    object ret = null;
    using (System.IO.MemoryStream ms = new System.IO.MemoryStream(data))
    {
        ret = ProtoBuf.Serializer.NonGeneric.Deserialize(T, ms);
        ms.Close();
    }
    return ret;
}
/// <summary>
/// Serialize object using protobuf serializer
/// </summary>
/// <param name="data"></param>
/// <returns></returns>
public static byte[] SerializeProtobuf(this object data)
{
byte[] bt = null;
using (System.IO.MemoryStream ms = new System.IO.MemoryStream())
{
ProtoBuf.Serializer.NonGeneric.Serialize(ms, data);
bt = ms.ToArray();
ms.Close();
}
return bt;
}
}
}
[20160329]
DBreeze.DataStructures.DataAsTree
Due to the desire of some people to have in DBreeze an out-of-the-box ability to store data as a tree with dependent nodes, we have created the new namespace DBreeze.DataStructures, containing the class DataAsTree.
using DBreeze;
using DBreeze.DataStructures;
//Initializing root node. Must be initialized after any new transaction (if DataAsTree must be
used there)
rootNode = new DataAsTree("testtree", tran);
//Inserting node with the content (it can be counted as file, thou any node can have Content)
var fileNode = new DataAsTree("file1");
fileNode.NodeContent = new byte[] { 1, 2, 3, 4, 5 };
//Adding it also to the root
rootNode.AddNode(fileNode);
//Committing transaction, so all our changes are saved now
tran.Commit();
}//eo using
//And recursively read all our inserted nodes starting from Root (any node can be used)
foreach (var tn in rootNode.ReadOutAllChildrenNodesFromCurrentRecursively(tran))
{
Console.WriteLine(tn.NodeName + "_" + tn.NodeId + "_" + tn.ParentNodeId);
byte[] cnt = tn.GetContent(tran);
if (cnt != null)
{
//Showing content of the file
}
}
}//eo using
Now, let's grab nodes by a specified name, rebind them to another parent and save the changes:
rootNode.RemoveNode(tn);
tn.ParentNodeId = 1;
rootNode.AddNode(tn);
}
tran.Commit();
}//eo using
[20160602]
DBreeze and external synchronizers, like ReaderWriterLockSlim
In different concurrent functions of an application, several approaches may be mixed, e.g.:
F1(){
RWLS.ENTER_WRITE_LOCK
DBREEZE.TRAN.START
DBREEZE.SYNCTABLE("X")
DO
DBREEZE.TRAN.END
RWLS.EXIT_WRITE_LOCK
}
F2(){
DBREEZE.TRAN.START
DBREEZE.SYNCTABLE("X")
RWLS.ENTER_WRITE_LOCK
DO
RWLS.EXIT_WRITE_LOCK
//OR
RWLS.ENTER_READ_LOCK
DO
RWLS.EXIT_READ_LOCK
DBREEZE.TRAN.END
}
The first simple rule to avoid trouble is not to mix approaches within functions.
Given that a Dictionary, a priori, must be faster than any persistent object, and access to it has to be designed to be super fast and concurrent, a RULE can be formulated: keep the RWLS section as short as possible (like in F2).
So it's better when RWLS always resides after SYNCTABLE.
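A C# sketch of the F2 pattern (engine and the protected in-memory structure are assumed to exist elsewhere):
static System.Threading.ReaderWriterLockSlim rwls = new System.Threading.ReaderWriterLockSlim();
void F2()
{
    using (var tran = engine.GetTransaction())
    {
        tran.SynchronizeTables("X");
        rwls.EnterWriteLock();
        try
        {
            //modify the in-memory structure here, keeping the lock as short as possible
        }
        finally
        {
            rwls.ExitWriteLock();
        }
        tran.Commit();
    }
}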
[20160628]
Integrated document text search functionality out of the box into the DBreeze core.
Starting from version 75, DBreeze has the text search engine from the DBreezeBased project implemented. Let's assume that we have the following class:
class MyTask
{
public long Id { get; set; }
public string Description { get; set; } = "";
public string Notes { get; set; } = "";
}
We want to store it in DBreeze, but we also want to be able to find it by the text contained in Description and Notes.
//we want to store searchable text (text index) in table “TasksTextSearch” and MyTask itself
in table "Tasks"
tran.SynchronizeTables("Tasks", "TasksTextSearch");
//Storing task
tsk = new MyTask()
{
Id = 1,
Description = "Starting with the .NET Framework version 2.0, well if you derive a class
from Random and override the Sample method, the distribution provided by the derived class
implementation of the Sample method is not used in calls to the base class implementation of the
NextBytes method. Instead, the uniform",
Notes = "distribution returned by the base Random class is used. This behavior improves
the overall performance of the Random class. To modify this behavior to call the Sample method in
the derived class, you must also override the NextBytes method"
};
tran.Insert<long, byte[]>("Tasks", tsk.Id, null);
//Creating the text index for the document search. Any word or word part (minimum 3 chars, check
//TextSearchStorageOptions) from Description and Notes will return this document in the future
tran.TextInsert("TasksTextSearch", tsk.Id.To_8_bytes_array_BigEndian(), tsk.Description + " " + tsk.Notes, "");
“contains” or “full-match”
A very important aspect is the way the searchable text is stored.
There is a possibility to store words which can later be searched using “contains” logic or “full-match” logic. Words stored with “full-match” occupy less space in the database file (and memory) and can be found only by searching the complete word. This is necessary for the multi-parameter search, which will be explained in later chapters.
Example:
The search engine, internally using StartsWith, will be able to find a match for the word “wizard”, stored with “contains” logic, by the fragments wizard, wizar, wiza, wiz, izard, izar, iza, zar, zard, ard etc…
Deferred indexing
By default, every text insert uses the option DefferedIndexing = false.
It means that the search index is built within the given transaction, while committing it.
It's good for a relatively small amount of search words, but the larger this amount is, the longer the commit will take.
To stay with fast commits, independent of the searchable-set size, use the DefferedIndexing = true option. It will run indexing in a parallel thread.
In case of abnormal program termination, indexing will go on after restarting the DBreeze engine.
It's possible to mix approaches for different searchable sets inside of one transaction, by changing the DefferedIndexing parameter for different tran.TextInsertToDocument calls.
Storage configuration.
The current quantity of words in one block is configured to 1000 and the initial reserved space for every block is 100.000 bytes.
Having that:
The minimal size of a block is 100.000 bytes.
The maximal size of a block for 10.000 added documents is 1.250.000 bytes.
The expected size of a block for 10.000 added documents is 300.000 bytes.
[20160718]
.NET Portable support
Get from release folder Portable version of DBreeze (or correspondent version from GitHub
Release):
https://github.com/hhblaze/DBreeze/releases/download/v1.075/DBreeze_01_075_20160705
_NETPortable.zip
Now we are able to describe any business logic relying on DBreeze manipulation right in a portable (cross-platform) class and then use the final library from any platform-specific project (UWP, Android, iOS etc.).
.NET Portable doesn't have file operations implemented; that's why the FSFactory.cs class (from NETPortable.zip) must be instantiated in a platform-specific class and then supplied, as a parameter implementing the interface, to the portable DBreeze instance. Read more in
!!!Readme.txt (from NETPortable.zip).
[20160921]
DBreezeEngine.BackgroundTasksExternalNotifier
At this moment we have one possible background task - the TextIndexer. Probably in the future there will be more of them.
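A subscription sketch (assuming the notifier is a simple callback receiving the task name and a payload object; the exact delegate shape is version-specific):
engine.BackgroundTasksExternalNotifier = (taskName, obj) =>
{
    //e.g. the TextIndexer reports here when a deferred indexing run is finished
    Console.WriteLine(taskName);
};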
Intro
It’s possible to store words, which may be later searched by “contains” logic and by
“full-match” logic all together.
“(boy | girl) & (with) & (red | black) & (umbrella | shoes)”
Such approach can help not only in search of the text, but also in the fast object search by
multiple parameters, avoiding full scans or building unnecessary relational indexes.
We can imagine objects with many properties from different comboboxes, checkboxes and
other fields which we want to search at once:
#PROF_1 means profession or skill with database index 1 stored in a separate table (let it
be programmer).
#PROF_2 means profession or skill with database index 2 (let it be unix administrator).
Here we are searching for a candidate, who must be a man from Hamburg with knowledge of the German language, extra both or any of the Russian or English languages must be on board, and who is a programmer or a unix administrator.
Range queries
To store data for the range traversing we must use ordinary DBreeze indexes, but with
text-search subsystem we can make lots of tricks.
For example, we want to find Honda car dealers in a 300 km radius around Bremen.
Let's assume that we have one in Hamburg - 200 km away from Bremen. To make its location searchable via the text-system, we split the earth map into tiles with an area of 500 km2, receiving a grid of numbered tiles (like T12545). It's just a mathematical operation: by supplying latitude and longitude of a point we can instantly get the name of the tile where it resides.
Before car dealer is stored into database, its address must be geocoded and tile number
must be stored inside the text-search index together with the other meta information:
So, this car dealer sells Honda and Mitsubishi, resides somewhere in tile T15578.
To search for any Honda dealer in a radius of 300 km from Bremen, we geocode the Bremen city center coordinates and get all tiles in a 300 km radius around this point (a very fast operation, getting all square names from the top-left corner to the bottom-right). Let's assume that around this Bremen point, in a radius of 300 km, there are four 500 km2 tiles (T14578, T14579, T15578, T15579).
Now we search
“(#PROD_HONDA) & (#TILE_T14578 | #TILE_T14579 | #TILE_T15578 | #TILE_T15579)”
Hamburg car dealer will be found. Distance for returned entities may be re-checked to get
100% precision.
We can save global DateTime information in the text-index, like year and month, to make several types of combined search easier.
Finding documents for customerID-124, from May 2016 to July 2016, containing the text “monoblocks”:
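A hypothetical query (the tag naming scheme is assumed):
“(#CUST_124) & (#DT_2016_05 | #DT_2016_06 | #DT_2016_07) & (monoblock)”
where the customer and year-month tags are stored as “full-match” words and “monoblock” is searched with “contains” logic.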
Of course, everything depends on the quantity of data residing under the different indexes. Sometimes it is better to traverse the range of a CustomerID+DateTime DBreeze index, getting all documents and checking that they contain the word “monoblock” inside.
Parameter changes
tran.TextInsert("TextSearch", ((long)157).To_8_bytes_array_BigEndian(),
"Alex Solovyov Hamburg 21029", "#JOB_1 #STATE_3");
tran.TextInsert("TextSearch", ((long)182).To_8_bytes_array_BigEndian(),
"Ivars Sudmalis Hamburg 21035", "#JOB_2 #STATE_2");
tran.Commit();
}
Note, new insert of the same external ID will work like an update.
Words “ "Ivars Hamburg 21035" ” “Alex Hamburg 21029” are stored using “contians” logic and later
can be searched using “contains” logic. So, both documents can be found by searching text
“mburg“.
Words “#JOB_1 #JOB_2 #STATE_2 #STATE_3” are stored using “full-match” logic and can be
found only by searching complete word.
E.g search by “ATE_” will not return these documents.
Programmer must take care that “contains” words are not being mixed with “full-match”
words to avoid “dirty” search results.
E.g. it’s better to disallow “contains” words like “whatever#JOB_1”, otherwise it will be mixed
with full-matched “#JOB_1”.
First, the search manager for the table (TSM) must be instantiated. It lives only inside of one transaction. Via TSM it's possible to receive logical blocks. A logical block is a set of space-separated words which must be searched; the minimal quantity is 1 word. The first parameter is “contains” words, the second - “full-match” words. They can be mixed.
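An instantiation sketch (assuming tran.TextSearch returns the TSM bound to the given text-search table):
using (var tran = engine.GetTransaction())
{
    var tsm = tran.TextSearch("TextSearch");
    foreach (var doc in tsm.Block("mburg", "#JOB_1").GetDocumentIDs())
        Console.WriteLine(doc.To_Int64_BigEndian());
}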
In our example:
(((mali & 035) & (many | less)) | (full-matched word 21029)), then exclude all documents where the full-matched word “test” exists.
To achieve
“(boy | girl) & with & (red | black) & (umbrella | shoes)”,
where the bold-marked words are full-matched, we could write:
Blocks can be reused in case if we need to make several independent checks with the same
set of search parameters:
foreach (var w in tsm.Block("boy girl", "pet", false)
    .Or(new DBreeze.TextSearch.BlockAnd("2102"))
    .And("", "#LNDUA")
    .And(new DBreeze.TextSearch.BlockAnd("", "#LNDUA"))
    .And("", "#LNDUA #LNDDE", false)
    .Exclude(bl2)
    .GetDocumentIDs())
{ Console.WriteLine(w.To_Int64_BigEndian()); }
TextGetDocumentsSearchables
- If word is stored to be used with “contains” logic it will be saved like this:
around
round
ound
und (up to minimal search length)
- If a word is stored to be used with “full-match” logic, it will be saved only once. E.g. the word “around” will be saved only once:
around
- External document IDs, supplied while insert, will be transformed into monotonically
grown internal document ID. Matching between them will be saved, inserted text will
be also saved (it’s necessary for the “smart” update).
- When the word is stored into a DBreeze table as a key, as a value we store a byte[] where each bit location corresponds to an internal document ID and bit value 1 means that this word is inside of this document. If a word exists in documents with internal IDs 1,2,3,5,6,7 - WABI makes a transformation into binary 11101110 (0xEE). It means that if there are 1 000 000 (one million) documents and there is a word that was found only in the latest document, we need 1000000/8 = 125000 bytes (125KB) for its bitmap index. The same bitmap index size is needed in case the word exists in all documents. If a word was found only in the first document, it will occupy only 1 byte. If there are 1 mln documents and each of them contains the same 1 mln words, the final space must be around 125GB. But words are stored in blocks, WABIs are optimized and compressed, so the real physical space will be much less. If there are 20000 unique words dispersed across 10000 documents, around 7MB of space will be used, before the optimization algorithms start to work.
- Search performance. “(boy | girl) & with & (red | black) & (umbrella | shoes)”, where the bold-marked words are full-matches: for them, one DBreeze Select per word has to be made to get the WABI before starting the comparative analysis. For the non-bold words SelectForwardStartsWith is used; the cursor may find more than one matching result. But, for easy computation: as many search words - as many internal selects have to be made. Thereafter, the received binary indexes have to be merged by binary AND/OR logic.
Clustering
Insert into text-search table is accompanied by supplying index table name. Making new
index table, let’s say, for every 50000 documents, will give possibility to run search queries
in parallel for every 50000 documents block.
Received results have to be merged.
[20161214]
Mixing of multi-parameter and a range search.
For example there are tasks with descriptions. Each task has a creation date in UTC.
We would like to search all those tasks, which creation dates stay in a specified time range
and their descriptions contain some defined words.
Using only our text-search system, we are not very powerful in limiting ranges; using the key-range-search system, we are not very powerful in finding multiple parameters without a full scan of the range.
Every new insert of the external-ID into text-search subsystem generates internal
monotonically grown ID. So, in case if we are sure that our external-IDs also grow up
(maybe not monotonically, but grow up) with every insert, we can build up a mixed-search
system.
Starting from version 1.81, it's possible to limit the search range by supplying optional external-IDs to the TextSearchTable object. Also it's possible to choose the ordering of the returned document IDs (ascending, descending).
The default choice is always descending - the latest inserted documents will be returned first by the text-search system.
Getting all document-IDs which contain words “boy” and “shoes” and are limited by external
IDs from 3 - 17.
//Thinking descending
tsm.ExternalDocumentIdStart = ((long)17).To_8_bytes_array_BigEndian();
tsm.ExternalDocumentIdStop = ((long)3).To_8_bytes_array_BigEndian();
tsm.Descending = true;
foreach (var w in
    tsm.BlockAnd("boy shoes")
    .GetDocumentIDs())
{
    Console.WriteLine(w.To_Int64_BigEndian());
}
In our example, we could create an entity “task”, then create a secondary index, building a combined index from “creation DateTime”+”task Id”, then insert the description into “TaskSearchTable”, supplying the “task ID” as the external-ID.
When the time comes to search by description and a time range, we could fetch the first and the last task-IDs from the supplied time range using the secondary index (“creation DateTime”+”task Id”), and then search “TaskSearchTable” by the necessary filter words, supplying the start and end task-IDs as the ExternalDocumentIdStart/Stop search-range-limiting parameters.
[20170201]
Storing resources synchronized between memory and a disk.
Sometimes it’s necessary to have entities which must be stored inside of the in-memory
dictionary, for the fastest access, and, at the same time, synchronized with the disk.
Starting from DBreeze ver. 1.81 we have DBreezeEngineInstance.Resources, that is
available right after DBreeze engine instantiation. It can be called for resource manipulation
(Insert(Update)/Remove/Select) from any point of the program, inside or outside any
transaction.
DBEngine.Resources.Insert<MyResourceObjectType>("MyResourceName", new
MyResourceObjectType() { Property1 = "322223" });
DBEngine.Resources.Remove("MyResourceName");
rsr = DBEngine.Resources.Select<MyResourceObjectType>("MyResourceName");
There are several function overloads letting us work with batches, plus extra technical settings regulating whether resources must be held on-Disk or in-Memory, stored fast, and validated on insert.
[20170202]
DBreezeEngine.Resources.SelectStartsWith.
DBEngine.Resources.Insert<int>("t1", 1);
);
DBEngine.Resources.Insert<int>("t2", 2
DBEngine.Resources.Insert<int>("t3", 3 );
);
DBEngine.Resources.Insert<int>("b1", 1
DBEngine.Resources.Insert<int>("b2", 2 );
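A read-out sketch (the return shape of SelectStartsWith is assumed; check IntelliSense for the exact overload):
//Should enumerate resources "t1", "t2", "t3", but not "b1", "b2"
foreach (var kvp in DBEngine.Resources.SelectStartsWith<int>("t"))
    Console.WriteLine(kvp.Key + " -> " + kvp.Value);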
[20170306]
InsertDataBlockWithFixedAddress
This new function, available from ver. 1.84, always returns a fixed address for the inserted data-block, even if the block changes its location in the file (e.g. after updates).
byte[] blref = null;
using (var t = eng.GetTransaction())
{
    //First insert: null as the "old reference" creates a new data block
    blref = t.InsertDataBlockWithFixedAddress<byte[]>("t1", null, new byte[500]);
    t.Insert<int, byte[]>("t1", 1, blref);
    t.Commit();
}
using (var t = eng.GetTransaction())
{
    //Update: the reference stays the same even if the block moves inside the file
    blref = t.InsertDataBlockWithFixedAddress<byte[]>("t1", blref, new byte[10000]);
    t.Insert<int, byte[]>("t1", 1, blref);
    t.Commit();
}
//Also possible:
using (var t = eng.GetTransaction())
{
var row = t.Select<int, byte[]>("t1", 1);
var vall = row.GetDataBlockWithFixedAddress<byte[]>(0);
}
Let's assume that we have an entity and we want 2 extra secondary indexes to search this entity; we will use 1 table to store everything.
After byte[] {2} we will store the primary key - the entity ID, after byte[] {5} - the first secondary index, after byte[] {6} - the second secondary index.
The value in all cases will be the reference to the DataBlockWithFixedAddress.
using DBreeze.Utils;
//Storing in a dictionary first, to stay with a sorted batch insert - fast speed, smaller file size
Dictionary<string, Tuple<byte[], byte[]>> df = new Dictionary<string, Tuple<byte[], byte[]>>();
byte[] ik = null;
byte[] ref2v = null;
INSERT
//index grows
idx++;
}
}
//Insert itself
foreach (var el in df.OrderBy(r => r.Key))
{
t.Insert<byte[], byte[]>("t1", el.Value.Item1,
el.Value.Item2);
}
t.Commit();
}
Benchmarking:
Standard HDD, inserting 100K elements with 1 primary key and 2 secondary indexes.
Table file size 30MB; the operation took 5 seconds (around 5 inserts per row were used).
Update:
For a small amount of updates:
//Getting the reference to key 5 via the primary index and updating it. Of course, it's possible to get the reference via the secondary indexes also:
In case if we want to update a huge batch and we are not satisfied with the speed of
previous technique, we can follow such logic:
ik = new byte[] { 2 }.Concat(i.To_4_bytes_array_BigEndian());
df.Add(ik.ToBytesString(), new Tuple<byte[], byte[]>(ik, ref2v));
//Updating
foreach (var el in df.OrderBy(r => r.Key))
{
t.Insert<byte[], byte[]>("t1", el.Value.Item1,
el.Value.Item2);
}
Benchmarking:
Standard HDD, updating 100K elements with 1 primary key and 2 secondary indexes.
Table file size grew from 30MB to 60MB; the operation took 8 seconds (around 5 inserts per row were used).
Random/Sequential selects
Random rnd = new Random();
for (int i = 0; i < 10000; i++)
{
    int k = rnd.Next(99999);
Random select becomes several times faster compared with the case when a secondary index needs to look up the value via the primary key. Inserts are faster and the file size is smaller compared with the technique where the entity is stored together with each secondary index separately.
Benchmarking:
Standard HDD, 10K random lookups take 500 ms; sequential lookups take 120 ms.
New overloads:
[20170319]
RandomKeySorter
Starting from ver. 1.84, the transaction instance contains an instantiated class RandomKeySorter. It can be very handy for a batch insert of random keys or a batch update of random keys with the flag Technical_SetTable_OverwriteIsNotAllowed().
A huge speed increase and space economy can be achieved in such scenarios. We have discussed many times that DBreeze is sharpened for the insert of sorted keys within huge batch operations; here is a useful wrapper for that.
//Or remove
//t.RandomKeySorter.Remove<int>("t1", 1);
}
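A usage sketch (assuming Insert mirrors the Remove signature shown in the commented line above):
using (var t = engine.GetTransaction())
{
    t.Technical_SetTable_OverwriteIsNotAllowed("t1");
    var rnd = new Random();
    for (int i = 0; i < 100000; i++)
    {
        //keys may arrive in random order; RandomKeySorter sorts them in memory
        //and flushes them as an ascending batch on Commit
        t.RandomKeySorter.Insert<int, byte[]>("t1", rnd.Next(1000000), new byte[] { 1 });
    }
    t.Commit();
}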
New overloads:
t.InsertRandomKeySorter is equivalent to t.RandomKeySorter.Insert
t.RemoveRandomKeySorter is equivalent to t.RandomKeySorter.Remove
[20170321]
DBreeze as an object database. Objects and Entities.
Starting from ver. 1.84, there is a new data storage concept available (only for new tables). This approach is interesting when an entity/object has more than one search key (besides the primary one). The system will automatically add and remove indexes within CRUD operations.
Many DBreeze optimization concepts, like “Technical_SetTable_OverwriteIsNotAllowed” and “sorting keys in memory before insert”, are already implemented inside of this feature.
API explanation
Let’s define the custom serializer for DBreeze (in this example let’s take NetJSON from
NuGet)
using DBreeze;
using DBreeze.Utils;
using DBreeze.Objects;
DBreeze.Utils.CustomSerializator.ByteArraySerializator = (object o) => { return NetJSON.NetJSON.Serialize(o).To_UTF8Bytes(); };
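The matching deserializer binding is not shown in this fragment; a sketch (NetJSON's non-generic Deserialize is assumed):
DBreeze.Utils.CustomSerializator.ByteArrayDeSerializator = (byte[] bt, Type t) =>
{
    return NetJSON.NetJSON.Deserialize(t, System.Text.Encoding.UTF8.GetString(bt));
};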
Insert
Birthday = initBirthday.AddYears(rnd.Next(40)).AddDays(i),
Name = $"Mr.{i}",
Salary = 12000
};
NewEntity = true, //Changes Select-Insert pattern to Insert (speeds up insert process)
t.Commit();
}
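A fuller sketch of such an insert (the Person class, rnd, initBirthday, the loop variable i and the Indexes property layout are assumptions reconstructed from the fragment above and the API explanation below):
using (var t = engine.GetTransaction())
{
    var p = new Person()
    {
        Id = t.ObjectGetNewIdentity<long>("t1"),
        Birthday = initBirthday.AddYears(rnd.Next(40)).AddDays(i),
        Name = $"Mr.{i}",
        Salary = 12000
    };
    t.ObjectInsert<Person>("t1", new DBreeze.Objects.DBreezeObject<Person>
    {
        NewEntity = true, //Changes Select-Insert pattern to Insert (speeds up insert process)
        Entity = p,
        Indexes = new List<DBreeze.Objects.DBreezeIndex>
        {
            new DBreeze.Objects.DBreezeIndex(1, p.Id) { PrimaryIndex = true }, //PI - Primary Index
            new DBreeze.Objects.DBreezeIndex(2, p.Birthday) //SI - Secondary Index
        }
    }, false);
    t.Commit();
}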
There are 3 new functions available via the Transaction instance; they all start with the word “Object”:
ObjectGetNewIdentity
ObjectInsert
ObjectRemove
And one new function is available via DBreeze.DataTypes.Row: ObjectGet.
There can be 255 indexes per entity (value 0 is reserved). Index numbers must be specified within insert or select operations.
The entity itself is saved only once; each index then holds a reference to it. This concept reduces space and speeds up updates.
All indexes are stored in one table, each key starting with the byte defining its index (1, 2, 3 etc…). Under byte 0 the identity counter is saved, which serves the function ObjectGetNewIdentity.
The last parameter of the function ObjectInsert must be set to true if the batch CRUD operation must be sped up. Note that it can consume more physical space.
The parameter DBreezeObject.NewEntity is set by the programmer. It helps the system to skip the Select operation before Insert and can increase the insert speed; it may be set for new entities only.
ToIndex is a function helping to create byte[] keys. In this case it will create a byte[] from (byte)1 and (long)5, where (byte)1 is the identifier of the index and 5 is the primary key.
using DBreeze.Utils;
1.ToIndex((long)5) == ((byte)1).To_1_byte_array().Concat(((long)5).To_8_bytes_array_BigEndian())
In the next example: (byte)2 + DateTime + (long) - that's how the birthday index will be stored.
There is also another useful function for fast byte[] key crafting: DBreeze.Utils.ToBytes.
using DBreeze.Utils;
Console.WriteLine(127.ToBytes((long)45).ToBytesString());
.ToBytes vs .ToIndex
ToBytes differs from ToIndex in the way the first element is cast: ToIndex casts the integer to a byte, while ToBytes keeps it as an integer:
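An illustration (hypothetical dump, consistent with the key encodings shown later in this chapter - bytes 0..255 take one byte, integers are stored big-endian with a flipped sign bit):
Console.WriteLine(1.ToIndex((long)5).ToBytesString()); //018000000000000005 - 1 byte + 8 bytes
Console.WriteLine(1.ToBytes((long)5).ToBytesString()); //800000018000000000000005 - 4 bytes + 8 bytes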
Getting a range of objects via the secondary index stored under key 2 (birthdays). Note that within insert, by default, the primary key is added to the end of the secondary key, so our search through birthdays will look like this:
In case we want to use StartFrom (having no idea about the end), we use data-type max/min values: getting all persons born starting from 17 November 2007:
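A sketch (index layout as described: (byte)2 + birthday DateTime + primary key at the end; ObjectGet materializes the entity):
using (var t = engine.GetTransaction())
{
    foreach (var row in t.SelectForwardFromTo<byte[], byte[]>("t1",
        2.ToIndex(new DateTime(2007, 11, 17)), true,
        2.ToIndex(DateTime.MaxValue), true))
    {
        var obj = row.ObjectGet<Person>();
        //obj.Entity holds the Person
    }
}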
Because all indexes are stored in one table, we have to use range limitations (read about DBreeze indexes), and Forward/Backward FromTo becomes this concept's favorite function for range selects.
Update
//We could supply all available indexes here, like in the first insert - it's typical
//behaviour (they will not be overwritten if they have not changed). But it's not
//necessary if we are sure that we don't want to change them.
//Adding a new indexed parameter (only for this entity, not for all):
new DBreezeIndex(3, p.Salary), //SI - Secondary Index
},
//NewEntity = true,
Entity = p
}, false);
But very often it's necessary to get data from the database first, change it and then save it back.
Second possibility of update:
//Updating entity
ex.Entity.Name = "Superman";
//Saving entity
var ir = t.ObjectInsert<Person>("t1", ex, true); //With e.g. the high-speed flag
Remove entity
t.Commit();
It's possible to create many counters, which will be automatically stored in the table.
The default counter is stored under the key address byte[] {0}. For other counters, the address and, if needed, the seed must be set.
Here ExtraIdentity will be stored under the key address new byte[] {255,1}; this place is reserved for this counter with the seed 4 (4, 8, 12, 16 etc...).
Creating another one with the seed 7 (7, 14, 21, 28...):
ExtraIdentity = t.ObjectGetNewIdentity<ulong>("t1", new byte[] { 255, 2 }, 7);
In this case “t1” cannot use index 255 for inserting objects, because it's already occupied by the user's key generation.
[20170327]
How DBreeze index works.
using DBreeze;
using DBreeze.Utils;
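The insert part of this example is not shown in the fragment; a sketch consistent with the SelectForward output below:
using (var t = engine.GetTransaction())
{
    foreach (var k in new string[] { "a", "aa", "aaa", "aab", "aac", "aad",
        "b", "bb", "bba", "bbb", "bbc", "bbd",
        "c", "cc", "cca", "ccb", "ccc", "ccd" })
        t.Insert<string, byte[]>("t1", k, null);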
t.Commit();
}
using (var t = engine.GetTransaction())
{
foreach (var r in t.SelectForward<string, byte[]>("t1"))
{
Console.WriteLine(r.Key);
}
/*
a
aa
aaa
aab
aac
aad
b
bb
bba
bbb
bbc
bbd
c
cc
cca
ccb
ccc
ccd
*/
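One query that produces the next output could be (a sketch; the original select is not shown in the fragment):
foreach (var r in t.SelectForwardFromTo<string, byte[]>("t1", "aab", true, "aad", true))
{
    Console.WriteLine(r.Key);
}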
/*
aab
aac
aad
*/
t.Commit();
}
t.Commit();
}
/*
01 08D4902502B9C000 800000000000007D -> 1 2017.05.01 125
01 08D4902502B9C000 800000000000007E -> 1 2017.05.01 126
01 08D4902502B9C000 800000000000007F -> 1 2017.05.01 127
*/
Optimizing the code using the ToIndex or ToBytes functions (rewriting the insert from the previous example):
/*
01 08D4902502B9C000 800000000000007D -> 1 2017.05.01 125
01 08D4902502B9C000 800000000000007E -> 1 2017.05.01 126
01 08D4902502B9C000 800000000000007F -> 1 2017.05.01 127
02 08D4902502B9C000 8000000000000071 -> 2 2017.05.01 113
02 08D4902502B9C000 8000000000000073 -> 2 2017.05.01 115
02 08D4902502B9C000 8000000000000075 -> 2 2017.05.01 117
*/
byte[] key = 1.ToIndex((long)1);
t.Insert<byte[], byte[]>("t1", key, null);
key = 1.ToIndex((long)2);
t.Insert<byte[], byte[]>("t1", key, null);
key = 1.ToIndex((long)3);
t.Insert<byte[], byte[]>("t1", key, null);
key = 1.ToIndex((long)4);
t.Insert<byte[], byte[]>("t1", key, null);
/*
Under index Nr 2 is stored entity creation date, so we could search over it
also.
We have to add entity primary key to the end of this index to avoid key
overwriting in case, when different entities have the same creation date.
*/
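//(assumed lines, matching the dump below; ToIndex with several arguments is presumed)
key = 2.ToIndex(new DateTime(2017, 5, 1), (long)1);
t.Insert<byte[], byte[]>("t1", key, null);
key = 2.ToIndex(new DateTime(2017, 8, 1), (long)2);
t.Insert<byte[], byte[]>("t1", key, null);
key = 2.ToIndex(new DateTime(2017, 8, 1), (long)3);
t.Insert<byte[], byte[]>("t1", key, null);
key = 2.ToIndex(new DateTime(2017, 10, 1), (long)4);
t.Insert<byte[], byte[]>("t1", key, null);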
t.Commit();
}
//Showing table content after insert. All indexes are stored together.
using (var t = engine.GetTransaction())
{
foreach (var r in t.SelectForward<byte[], byte[]>("t1"))
{
Console.WriteLine(
r.Key.Substring(0, 1).ToBytesString() + " " +
r.Key.Substring(1).ToBytesString()
);
}
}
/*
01 8000000000000001
01 8000000000000002
01 8000000000000003
01 8000000000000004
02 08D4902502B9C0008000000000000001
02 08D4D87040BAC0008000000000000002
02 08D4D87040BAC0008000000000000003
02 08D5085F5BED80008000000000000004
*/
//Showing entities which were created in the range (2017, 8, 1) - (2017, 11, 1)
using (var t = engine.GetTransaction())
{
foreach (var r in t.SelectForwardFromTo<byte[], byte[]>
("t1",
2.ToIndex(new DateTime(2017, 8, 1)),true,
2.ToIndex(new DateTime(2017, 11, 1)), true
))
{
Console.WriteLine(
r.Key.Substring(0, 1).ToBytesString() + " " +
r.Key.Substring(1).ToBytesString() +
" -> " +
" entityCreatedOn: " + r.Key.Substring(1,
8).To_DateTime().ToString("yyyy/MM/dd") + " " +
" entityId: " + r.Key.Substring(9, 8).To_Int64_BigEndian()
);
}
}
/*
02 08D4D87040BAC0008000000000000002 -> entityCreatedOn: 2017.08.01 entityId:
2
02 08D4D87040BAC0008000000000000003 -> entityCreatedOn: 2017.08.01 entityId:
3
02 08D5085F5BED80008000000000000004 -> entityCreatedOn: 2017.10.01 entityId:
4
*/
If something is not working as expected, please don't hesitate to write an issue on http://dbreeze.tiesky.com.