Posts tagged with '.NET'

This is a repost that originally appeared on the Couchbase Blog: Moving from SQL Server to Couchbase Part 3: App Migration.

In this series of blog posts, I’m going to lay out the considerations when moving to a document database when you have a relational background. Specifically, Microsoft SQL Server as compared to Couchbase Server.

In three parts, I’m going to cover:

  • Data modeling

  • The data itself

  • Applications using the data (this blog post)

The goal is to lay down some general guidelines that you can apply to your application planning and design.

If you would like to follow along, I’ve created an application that demonstrates Couchbase and SQL Server side-by-side. Get the source code from GitHub, and make sure to download a developer preview of Couchbase Server.

Migrate vs Rewrite

If you’re building a new web app, then Couchbase Server is a good choice to use as your system of record. Flexible data modeling, fast data access, and ease of scaling all make it a good fit.

Couchbase Server can supplement SQL Server in your existing web application. It can be a session store or a cache store. You don’t have to replace your RDBMS to benefit from Couchbase Server. You can use it as your system of engagement.

However, if you’re considering making a document database your "system of record" for an existing app, then you need to plan what to do about that application (assuming you’ve already come up with a data modeling and data migration plan as covered in the earlier parts of this blog series). There are really two options:

  • Replace your data/service layer. If you’ve built your app in a way that decouples it from the underlying persistence, that’s going to benefit you tremendously when switching from SQL Server to Couchbase. If you are using an SOA, for instance, then you might not have to make very many changes to the web application.

  • Rebuild your application. If you don’t have a decoupled architecture, then you’ll likely have to bite the bullet and rewrite/refactor large portions of your application. This can be a significant cost that you’ll have to factor in when deciding whether or not to switch to a document database. I wish I could say it would be easier, that there was some magic potion you could use. But remember, even if the cost of a rebuild is too great, you can still use Couchbase Server in tandem with SQL Server.

The pieces of your stack that you might need to rebuild or replace include:

  • ADO.NET - If you are using plain ADO.NET or a micro-OR/M like Dapper, these can be replaced by the Couchbase .NET SDK.

  • OR/M - Entity Framework, NHibernate, Linq2SQL, etc. These can be replaced by Linq2Couchbase.

  • Any code that uses those directly - Any code that touches ADO.NET, OR/Ms, or other SQL Server code will need to be replaced to use Couchbase (and/or rewritten to introduce another layer of abstraction).

The rest of this blog post will be tips and guidelines that apply for rewriting, refactoring, or starting a new project.

What’s going to be covered

Document databases force business logic out of the database to a larger extent than relational databases. As nice as it would be if Couchbase Server had every feature under the sun, there are always tradeoffs.

In this blog post, we will cover the changes to application coding that come with using Couchbase. At a high level, here is what will be covered in this blog post. On the left, a SQL Server feature; on the right, the closest equivalent when using Couchbase Server.

SQL Server                          Couchbase Server
tSQL                                N1QL
Stored Procedures                   Service tier
Triggers                            Service tier
Views                               Map/Reduce Views
Autonumber                          Counter
OR/M (Object/relational mapper)     ODM (Object data model)
ACID                                Single-document ACID

In addition, we’ll also be covering these important topics:

  • Serialization

  • Security

  • Concurrency

  • SSIS, SSRS, SSAS

Using N1QL

The N1QL (pronounced "nickel") query language is one of my favorite features of Couchbase Server. You are already comfortable with the SQL query language. With N1QL, you can apply your expertise to a document database.

Here are a few examples to show the similarities between N1QL and tSQL:

tSQL:  DELETE FROM [table] WHERE val1 = 'foo'
N1QL:  DELETE FROM `bucket` WHERE val1 = 'foo'

tSQL:  SELECT * FROM [table]
N1QL:  SELECT * FROM `bucket`

tSQL:  SELECT t1.* , t2.* FROM [table1] t1 JOIN [table2] t2 ON t1.id = t2.id
N1QL:  SELECT b1.* , b2.* FROM `bucket` b1 JOIN `bucket` b2 ON KEYS b1.mykeys

tSQL:  INSERT INTO [table] (key, col1, col2) VALUES (1, 'val1','val2')
N1QL:  INSERT INTO `bucket` (KEY, VALUE) VALUES ('1', {"col1":"val1", "col2":"val2"})

tSQL:  UPDATE [table] SET val1 = 'newvalue' WHERE val1 = 'foo'
N1QL:  UPDATE `bucket` SET val1 = 'newvalue' WHERE val1 = 'foo'

Thanks to N1QL, migrating your SQL queries should be less difficult than it would be with other document databases. Your data model will be different, and not every function in tSQL is (yet) available in N1QL. But for the most part, your existing tSQL-writing expertise can be applied.

Back to the shopping cart, here’s an example of a simple tSQL query that would get shopping cart information for a given user:

SELECT c.Id, c.DateCreated, c.[User], i.Price, i.Quantity
FROM ShoppingCart c
INNER JOIN ShoppingCartItems i ON i.ShoppingCartID = c.Id
WHERE c.[User] = 'mschuster'

In Couchbase, a shopping cart could be modeled as a single document, so a roughly equivalent query would be:

SELECT META(c).id, c.dateCreated, c.items, c.`user`
FROM `sqltocb` c
WHERE c.type = 'ShoppingCart'
AND c.`user` = 'mschuster';

Notice that while N1QL has JOIN functionality, no JOIN is necessary in this shopping cart query. All the shopping cart data is in a single document, instead of being spread out amongst multiple tables and rows.

The results aren’t exactly the same: the N1QL query returns a more hierarchical result. But the query could be modified with an UNNEST to flatten out the results if necessary. UNNEST is an intra-document join, which is a feature that’s necessary when writing SQL for JSON.
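
To illustrate, here is a rough sketch of what that UNNEST variant might look like, wrapped in C# in the same style as the other queries in this series (the method name and projection are mine, not from the sample app):

public List<dynamic> GetFlattenedCartItems(string userName)
{
    // UNNEST joins the document to the elements of its own 'items' array,
    // producing one flat row per item (closer to the tSQL JOIN results)
    var n1ql = @"SELECT META(c).id, c.`user`, i.name, i.price, i.quantity
        FROM `sqltocb` c
        UNNEST c.items i
        WHERE c.type = 'ShoppingCart'
        AND c.`user` = $userName";
    var query = QueryRequest.Create(n1ql);
    query.AddNamedParameter("userName", userName);
    return _bucket.Query<dynamic>(query).Rows;
}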

In many document databases other than Couchbase, you would likely have to learn an API for creating queries, and you would not be able to apply your tSQL experience to help ramp up. I’m not saying that translation is always going to be a walk in the park, but it’s going to be relatively easy compared to the alternatives. If you’re starting a new project, then you’ll be happy to know that your SQL-writing skills will continue to be put to good use!

When writing C# to interact with N1QL, there are a couple key concepts that are important to know.

Scan Consistency. When executing a N1QL query, there are several scan consistency options. Scan consistency defines how your N1QL query should behave towards indexing. The default behavior is "Not Bounded", and it provides the best performance. At the other end of the spectrum is "RequestPlus", and it provides the best consistency. There is also "AtPlus", which is a good middle-ground, but takes a little more work. I blogged about Scan Consistency back in June, and it’s worth reviewing before you start writing N1QL in .NET.

Parameterization. If you are creating N1QL queries, it’s important to use parameterization to avoid SQL injection. There are two options with N1QL: positional (numbered) parameters and named parameters.

Here’s an example of both Scan Consistency and Parameterization used in C#:

var query = QueryRequest.Create(n1ql);
query.ScanConsistency(ScanConsistency.RequestPlus);
query.AddNamedParameter("userId", id);
var result = _bucket.Query<Update>(query);
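
The example above uses a named parameter. For completeness, here is a sketch of the positional flavor; it assumes the N1QL string uses a numbered placeholder such as $1 instead of a name:

// the query string would contain e.g. "WHERE c.`user` = $1"
var query = QueryRequest.Create(n1ql);
query.ScanConsistency(ScanConsistency.RequestPlus);
query.AddPositionalParameter(id);   // fills $1
var result = _bucket.Query<Update>(query);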

I’m not going to dive into the N1QL query language any more than this, because it is such a deep topic. But you can check out the basics of N1QL and get started with the interactive N1QL tutorial.

SQL Stored Procedures

There is no equivalent of stored procedures (sprocs) in Couchbase. If you don’t already have a service tier, and you are using sprocs to share some logic across domains, I recommend that you create a service tier and move the logic there.

In fact, I wasn’t sure whether sprocs belonged in part 2 or part 3 of this blog series. Typical tiers in an enterprise application:

  • Web tier (UI - Angular/React/Traditional ASP.NET MVC)

  • Service tier (ASP.NET WebApi)

  • Database

Sprocs live in the database, but they contain logic. The service tier also contains business logic. So do they count as data or functionality?

I took a Twitter straw poll to decide.

Twitter straw poll on Stored Procedures

But my recommendation is that if you already have a service tier, move the sproc logic into that. If you don’t have a service tier, create one. This will live between the database and the UI.

In the source code for this series, I’ve created a single stored procedure.

CREATE PROCEDURE SP_SEARCH_SHOPPING_CART_BY_NAME
	@searchString NVARCHAR(50)
AS
BEGIN
	SELECT c.Id, c.[User], c.DateCreated
	FROM ShoppingCart c
	WHERE c.[User] LIKE '%' + @searchString + '%'
END
GO

This sproc can be executed from Entity Framework as follows:

public List<ShoppingCart> SearchForCartsByUserName(string searchString)
{
    var cmd = _context.Database.Connection.CreateCommand();
    cmd.CommandText = "SP_SEARCH_SHOPPING_CART_BY_NAME @searchString";
    cmd.Parameters.Add(new SqlParameter("@searchString", searchString));
    _context.Database.Connection.Open();
    var reader = cmd.ExecuteReader();

    var carts = ((IObjectContextAdapter) _context)
        .ObjectContext
        .Translate<ShoppingCart>(reader, "ShoppingCarts", MergeOption.AppendOnly);

    var result = carts.ToList();
    _context.Database.Connection.Close();
    return result;
}

By the way, that Entity Framework sproc code is ugly. Maybe I did it wrong? I’m not trying to slander EF, but generally, I haven’t used OR/Ms and sprocs together much in my career. Certainly a piece of ADO.NET or Dapper code would be shorter and cleaner.

This is a very simple sproc, but it introduces a basic search functionality. The benefits to such a sproc:

  • Reuse: The same sproc can be reused between different applications

  • Abstraction: The sproc implementation can be changed or improved. In this case, a basic LIKE could be switched out for a more robust full text search.

Any approach taken with introducing a service tier should provide the same benefits. I created an ASP.NET WebApi endpoint to take the place of the sproc.

[HttpGet]
[Route("api/searchByName/{searchString}")]
public IHttpActionResult SearchByName(string searchString)
{
    var n1ql = @"SELECT META(c).id, c.`user`
        FROM `sqltocb` c
        WHERE c.type = 'ShoppingCart'
        AND c.`user` LIKE $searchString";
    var query = QueryRequest.Create(n1ql);
    query.AddNamedParameter("searchString", "%" + searchString + "%");
    query.ScanConsistency(ScanConsistency.RequestPlus);
    var results = _bucket.Query<ShoppingCart>(query).Rows;

    return Json(results);
}

Note: for the sake of simplicity in the sample code, this endpoint actually lives in the same web project, but in production, it should be moved to its own project and deployed separately.

This endpoint holds a N1QL query that is similar in nature to the above sproc. Let’s see if it holds up to the same benefits:

  • Reuse? Yes. This endpoint can be deployed to its own server and be reused from other applications.

  • Abstraction? Again, yes. The implementation uses the naive LIKE approach, which we could improve by switching it to use Couchbase’s Full Text Search features without changing the API.

Instead of calling a sproc through Entity Framework, this endpoint would be called via HTTP. Here’s an example that uses the RestSharp library:

public List<ShoppingCart> SearchForCartsByUserName(string searchString)
{
    // typically there would be authentication/authorization with a REST call like this
    var client = new RestClient(_baseUrl);
    var request = new RestRequest("/api/searchByName/" + searchString);
    request.AddHeader("Accept", "application/json");
    var response = client.Execute<List<ShoppingCart>>(request);
    return response.Data;
}

If you are building a new project, I recommend that you create a service tier with the expectation of it being used across your enterprise. This allows you to have the same "shared code" that sprocs would normally provide without putting that code into the database.

The same is true for SQL Server functions, user-defined types, rules, and user-defined CLR objects.

Note: the above sproc example is a SELECT just to keep the example simple. In this case, you could potentially create a MapReduce View instead (which is discussed below). A MapReduce view cannot mutate documents though, so a service tier approach is a better general solution to replacing sprocs.

SQL Triggers

If sprocs weren’t already controversial enough, just bring up triggers in a conversation. As with stored procedures, I generally recommend that you move the trigger logic into the service tier, away from the database. If your software project depends on a lot of triggers, or complex triggers, or a lot of complex triggers, then you might want to save Couchbase Server for another project.

That being said, there is some cutting-edge stuff that is being worked on that might be roughly equivalent to triggers. If you are interested in this, please contact me, and also stay tuned to the Couchbase Blog for the latest information.

Views

In SQL Server, Views are a way to store complex queries on the server, as well as provide some performance benefits. In Couchbase, Map/reduce views have been providing similar functionality for some time. For the most part, the functionality provided by views can be provided in a more expressive way with N1QL. However, views are not going away, and there are sometimes benefits to using them.

Map/reduce views can be defined and stored on the Couchbase cluster. To create them, you define a "map" function (with JavaScript) and optionally a "reduce" function (also in JavaScript).

In the Couchbase Console UI, go to Indexes → Views → Create View. Create a design document, and create a view within that design document.

Editing a Map/Reduce view in Couchbase
Figure 1. Screenshot of the Map/Reduce view editor in the latest Couchbase Console

At the center is the Map/Reduce code that you are working on. A sample document and its meta-data is also shown to give you some visual help, and at the bottom you have some options for executing your view.

For complete details on how views work, check out the MapReduce Views documentation.

As a quick example, I want to create a view that lists only the people who have an age greater than 21.

function (doc, meta) {
  if(doc.age > 21) {
  	emit(meta.id, doc.name);
  }
}

This view would emit the key of the document and the value of the "name" field. If my bucket contained the following documents:

foo1	{"age":17,"name":"Carmella Albert"}
foo2	{"age":25,"name":"Lara Salinas"}
foo3	{"age":35,"name":"Teresa Johns"}

Then the results of the view would look like:

Key       Value
"foo2"    "Lara Salinas"
"foo3"    "Teresa Johns"

The results of these views are updated automatically on an interval, and are also updated incrementally as documents are mutated. This means that, by default, the results of the views are eventually consistent with the actual documents. As a developer, you can specify the level of consistency (or staleness) you want. This will have an impact on performance.

Map/reduce views can be very helpful when you have very complex logic that’s easier to write in JavaScript than it is to write in N1QL. There can also be performance benefits when you are working with a write-heavy system.

Views can be accessed from .NET using ViewQuery.

// query the first 10 rows of the "viewname" view in the "viewdesigndocument" design document
var query = new ViewQuery().From("viewdesigndocument", "viewname").Limit(10);
var people = bucket.Query<dynamic>(query);
foreach (var person in people.Rows)
    Console.WriteLine(person.Key);
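
If you need fresher results than the default gives you, the staleness can be dialed in on the query itself. A minimal sketch, assuming the 2.x SDK’s StaleState option:

// ask the view engine to update the index before returning results
// (StaleState.Ok and StaleState.UpdateAfter trade freshness for speed)
var freshQuery = new ViewQuery()
    .From("viewdesigndocument", "viewname")
    .Stale(StaleState.False)
    .Limit(10);
var freshPeople = bucket.Query<dynamic>(freshQuery);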

Alternatively, you could create N1QL queries instead of using Views. In many cases, N1QL will be easier to write, and the performance difference will be negligible. Unlike Views, the N1QL queries would live in the service tier. There is currently no way to store a "N1QL View" on the Couchbase Server cluster.

Serialization/deserialization

Whether you’re using N1QL, Views, or key/value operations, it’s important to consider how JSON is serialized and deserialized.

The .NET SDK uses Newtonsoft’s Json.NET. If you are familiar with that tool (and who among .NET developers isn’t), then remember that you can use the same attributes (like JsonProperty, JsonConverter, etc). In some edge cases, it might be useful to create your own custom serializer, which is possible with the Couchbase .NET SDK. Check out the documentation on serialization and non-JSON documents for more information.

Security

Couchbase has role-based access control (RBAC) for administrators.

Couchbase can integrate with LDAP to manage Couchbase administrators and assign roles to users. Couchbase can also create read-only users internally.

There are some more robust changes and improvements coming to the Couchbase RBAC system, so stay tuned. In fact, I would recommend that you start checking out the monthly developer builds, as I expect to see some interesting improvements and features in this area soon!

Concurrency

Concurrency is something that you often have to deal with, especially in a web application. Multiple users could be taking actions that result in the same document being changed at the same time.

SQL Server uses pessimistic locking by default. This means that SQL Server expects rows to be in contention, and so it acts defensively. This is a sensible default for relational databases because denormalized data is spread across multiple tables and multiple rows. SQL Server does have the ability to use optimistic locking as well, through a variety of transaction levels.

Couchbase also offers two options to deal with concurrency: optimistic and pessimistic.

Optimistic. This is called "optimistic" because it works best when it’s unlikely that a document will be in contention very often. You are making an optimistic assumption. On Couchbase, this is done with CAS (Compare And Swap). When you retrieve a document, it comes with metadata, including a CAS value (which is just a number). When you go to update that document, you can supply the CAS value. If the values match, then your optimism paid off, and the changes are saved. If they don’t match, then the operation fails, and you’ll have to handle it (a merge, an error message, etc). If you don’t supply a CAS value, then the changes will be saved no matter what.
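
Here is a minimal sketch of the optimistic path with the .NET SDK (the document key and the merge strategy are illustrative):

// read the document; the result carries the current CAS value
var get = _bucket.Get<ShoppingCart>("ShoppingCart::123");
var cart = get.Value;
cart.Items.Add(new Item { Name = "Nerf Blaster", Price = 19.99m, Quantity = 1 });

// supply the CAS on the write; it fails if someone else changed the document first
var replace = _bucket.Replace("ShoppingCart::123", cart, get.Cas);
if (!replace.Success)
{
    // contention: re-read, merge, retry, or surface an error to the user
}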

Pessimistic. This is called "pessimistic" because it works best when you know a document is going to be mutated a lot. You are making a pessimistic assumption, and are forcibly locking the document. If you use GetAndLock in the .NET SDK, the document will be locked, which means it can’t be modified. Documents are locked for a maximum of 15 seconds. You can set a lower value. You can also explicitly unlock a document, but you must keep track of the CAS value to do so.
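
And a sketch of the pessimistic path (the key and the 10-second lock duration are illustrative):

// lock the document for up to 10 seconds (15 is the maximum)
var locked = _bucket.GetAndLock<ShoppingCart>("ShoppingCart::123", TimeSpan.FromSeconds(10));

// ... make changes ...

// writing with the CAS from GetAndLock releases the lock;
// or unlock explicitly without writing:
_bucket.Unlock("ShoppingCart::123", locked.Cas);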

For more detail, check out the documentation on Concurrent Document Mutations.

Autonumber

Couchbase Server does not currently offer any sort of automatic key generation or sequential key numbering.

However, you can use the Counter feature to do something similar. The idea is that a document is set aside as a special counter document. This document can be incremented as an atomic operation, and the number can be used as a partial or whole key of the new document being created.

Ratnopam Chakrabarti, a developer for Ericsson, recently wrote a guest blog post about how to create sequentially numbered keys with Couchbase Server. His example is in Java, but it is easy enough to follow, so I won’t repeat it here.
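
In C#, the same counter idea looks roughly like this (the counter key and document key convention here are mine):

// atomically increment a dedicated counter document (created at 1 if it doesn't exist)
var counter = _bucket.Increment("counter::ShoppingCart", 1, 1);
var nextId = counter.Value;

// use the counter value as (part of) the new document's key
var key = "ShoppingCart::" + nextId;
_bucket.Insert(key, new ShoppingCart { User = "mschuster", DateCreated = DateTime.Now });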

OR/Ms and ODMs

If you are using SQL Server, you might be familiar with OR/Ms (Object-relational mappers). Entity Framework, NHibernate, Linq2SQL, and many others are OR/Ms. OR/Ms attempt to bridge the gap between structured data in C# and normalized data in relational databases. They also typically provide other capabilities like Linq providers, unit of work, etc. I believe that OR/Ms follow the 80/20 rule. They can be very helpful 80% of the time, and a pain in the neck the other 20%.

For document databases, there is a much lower impedance mismatch, since C# objects can be serialized/deserialized to JSON, and don’t have to be broken up into a normalized set of tables.

However, the other functionality that OR/Ms provide can still be helpful in document databases. The equivalent tool is called an ODM (Object Document Model). These tools help you define a set of classes to map to documents. Ottoman and Linq2Couchbase are popular ODMs for Couchbase, for Node and .NET respectively.

Linq2Couchbase has a Linq provider. It’s not an officially supported project (yet), but it is one of the most complete Linq providers I’ve ever used, and is used in production by Couchbase customers.

Below is an example from the Linq2Couchbase documentation that should look somewhat familiar for users of Entity Framework and NHibernate.Linq:

var context = new BucketContext(ClusterHelper.GetBucket("travel-sample"));
var query = (from a in context.Query<AirLine>()
             where a.Country == "United Kingdom"
             select a).
             Take(10);

I also used Linq2Couchbase in the sample code for this blog series. Here’s an example for Shopping Carts:

var query = from c in _context.Query<ShoppingCart>()
    where c.Type == "ShoppingCart"  // could use DocumentFilter attribute instead of this Where
    orderby c.DateCreated descending
    select new {Cart = c, Id = N1QlFunctions.Meta(c).Id};
var results = query.ScanConsistency(ScanConsistency.RequestPlus)
    .Take(10)
    .ToList();

Beyond being a great Linq provider, Linq2Couchbase also has an experimental change tracking feature. It’s definitely worth checking out. Brant Burnett is one of the key contributors to the project, and he’s also a Couchbase Expert. He presented a session at Couchbase Connect 2016 called LINQing to data: Easing the transition from SQL.

Transactions

I’ve already covered pessimistic and optimistic locking for transactions on a single document. Because of those, we can say that Couchbase supports ACID transactions on a per-document level. Couchbase does not, at this time, support ACID transactions among multiple documents.

Thinking back to the first blog post on data modeling, the need for multi-document transactions is often reduced or eliminated, compared to a relational model. A concept (like shopping cart) may require rows in multiple tables in a relational model, but a single document model in Couchbase.

If you are following a referential model, as in the social media example from the first blog post, you might be concerned about the lack of transactions. This highlights the importance of thinking about your use cases while creating your data model. If transactions are vital to your use case, the data model can often be structured to accommodate that. We are happy to help you through this, just ask!

Multi-document transaction support may come in the future if enough Couchbase developers and customers ask for it or need it. So, if you go through the exercise of designing a document database data model, and transactions are still a vital part of your project, then Couchbase may not be the best "system of record" for at least part of your project. Couchbase may still be the best "system of engagement", able to help with scaling, caching, performance, and flexibility where needed.

As a side note, it may be worth checking out the NDescribe project, as it includes an SDK that works on top of the Couchbase SDK and provides a transaction system. (Note that this is not an officially supported tool).

SSIS, SSAS, SSRS

Not everyone uses SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS), but these are powerful features that SQL Server has for integration, reporting, and analysis.

I can’t give you a blanket "use X instead of Y" for these, because it depends very much on your use case. I can point you in the direction of some of the tools available for Couchbase that revolve around data processing, data transformation, reporting, and analysis.

  • Kafka is an open source data streaming tool. Some of the popular use cases for Kafka include messaging, website activity tracking, metrics, and more.

  • Spark is a data processing engine, intended for large-scale data processing and ETL.

  • Hadoop is a big data framework for distributed storage and processing.

Couchbase has connectors that support each of these three popular tools.

Finally, Couchbase Analytics is currently in developer preview. It is intended as a data management engine that runs parallel to Couchbase Server. It’s a developer preview, and is not yet recommended to be used in production, but you can download Couchbase Analytics and Kafka, Spark, Hadoop extensions (click the Extensions tab) and try them out.

Summary

We’ve covered data modeling, data migration, and application migration through the lens of SQL Server. This is a good starting point for your next project, and will give you something to think about if you are considering migrating.

The Couchbase Developer Portal contains more details and information about every aspect of Couchbase Server.

I want to hear from you about what Couchbase can do to make your transition easier, whether you’re migrating or starting fresh. Did I miss something? Do you have a tool or system that you recommend? Have questions? Check out the Couchbase Forums, email me at matthew.groves@couchbase.com or find me on Twitter @mgroves.

This is a repost that originally appeared on the Couchbase Blog: Moving from SQL Server to Couchbase Part 2: Data Migration.
 
In this series of blog posts, I’m going to lay out the considerations when moving to a document database when you have a relational background. Specifically, Microsoft SQL Server as compared to Couchbase Server.
In three parts, I’m going to cover:
  • Data modeling
  • The data itself (this blog post)
  • Applications using the data
The goal is to lay down some general guidelines that you can apply to your application planning and design.
If you would like to follow along, I’ve created an application that demonstrates Couchbase and SQL Server side-by-side. Get the source code from GitHub, and make sure to download a developer preview of Couchbase Server.

Data Types in JSON vs SQL

Couchbase (and many other document databases) use JSON objects for data. JSON is a powerful, human readable format to store data. When comparing to data types in relational tables, there are some similarities, and there are some important differences.
All JSON data is made up of 6 types: string, number, boolean, array, object, and null. There are a lot of data types available in SQL Server. Let’s start with a table that is a kind of "literal" translation, and work from there.
SQL Server                      JSON
nvarchar, varchar, text         string
int, float, decimal, double     number
bit                             boolean
null                            null
XML/hierarchyid fields          array / object

It’s important to understand how JSON works. I’ve listed some high-level differences between JSON data types and SQL Server data types. Assuming you already understand SQL data types, you might want to spend some time learning more about JSON and JSON data types.
A string in SQL Server is often defined by a length. nvarchar(50) or nvarchar(MAX) for instance. In JSON, you don’t need to define a length. Just use a string.
A number in SQL Server varies widely based on what you are using it for. The number type in JSON is flexible, in that it can store integers, decimal, or floating point. In specialized circumstances, like if you need a specific precision or you need to store very large numbers, you may want to store a number as a string instead.
A boolean in JSON is true/false. In SQL Server, it’s roughly equivalent: a bit that represents true/false.
In JSON, any value can be null. In SQL Server, you set this on a field-by-field basis. If a field in SQL Server is not set to "nullable", then it will be enforced. In a JSON document, there is no such enforcement.
JSON has no date data type. Often dates are stored as UNIX timestamps, but you could also use string representations or other formats for dates. The N1QL query language has a variety of date functions available, so if you want to use N1QL on dates, you can use those functions to plan your date storage accordingly.
In SQL Server, there is a geography data type. In Couchbase, the GeoJSON format is supported.
There are some other specialized data types in SQL Server, including hierarchyid, and xml. Typically, these would be unrolled in JSON objects and/or referenced by key (as explored in part 1 of this blog series on data modeling). You can still store XML/JSON within a string if you want, but if you do, then you can’t use the full power of N1QL on those fields.

Migrating and translating data

Depending on your organization and your team, you may have to bring in people from multiple roles to ensure a successful migration. If you have a DBA, that DBA will have to know how to run and manage Couchbase just as well as SQL Server. If you are DevOps, or have a DevOps team, it’s important to involve them early on, so that they are aware of what you’re doing and can help you coordinate your efforts. Moving to a document database does not mean that you no longer need DBAs or Ops or DevOps to be involved. These roles should also be involved when doing data modeling, if possible, so that they can provide input and understand what is going on.
After you’ve designed your model with part 1 on data modeling, you can start moving data over to Couchbase.
For a naive migration (1 row to 1 document), you can write a very simple program to loop through the tables, columns, and values of a relational database and spit out corresponding documents. A tool like Dapper would handle all the data type translations within C# and feed them into the Couchbase .NET SDK.
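To make that naive approach concrete, here is a hedged sketch using Dapper to read rows and the Couchbase .NET SDK to write documents (the connection string, bucket reference, table name, and key convention are all assumptions for illustration):

using (var connection = new SqlConnection(sqlConnectionString))
{
    // Dapper maps each row to a dictionary of column name -> value
    var rows = connection.Query("SELECT * FROM ShoppingCart");

    foreach (IDictionary<string, object> row in rows)
    {
        // naive 1 row -> 1 document: the row's primary key becomes part of the document key
        var key = "ShoppingCart::" + row["Id"];
        bucket.Upsert(key, row);
    }
}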
Completely flat data is relatively uncommon, however, so for more complex models, you will probably need to write code to migrate from the old relational model to the new document model.
Here are some things you want to keep in mind when writing migration code (of any kind, but especially relational-to-nonrelational):
  • Give yourself plenty of time in planning. While migrating, you may discover that you need to rethink your model. You will need to test and make adjustments, and it’s better to have extra time than make mistakes while hurrying. Migrating data is an iterative cycle: migrate a table, see if that works, adjust, and keep iterating. You may have to go through this cycle many times.
  • Test your migration using real data. Data can be full of surprises. You may think that NVARCHAR field only ever contains string representations of numbers, but maybe there are some abnormal rows that contain words. Use a copy of the real data to test and verify your migration.
  • Be prepared to run the migration multiple times. Have a plan to clean up a failed migration and start over. This might be a simple DELETE FROM bucket in N1QL, or it could be a more nuanced and targeted series of cleanups. If you plan from the start, this will be easier. Automate your migration, so this is less painful.
  • ETL or ELT? Extract-Transform-Load, or Extract-Load-Transform. When are you going to do the transform? When putting data into Couchbase, the flexibility of JSON allows you to transform in place after loading if you choose.

An example ETL migration

I wrote a very simple migration console app using C#, Entity Framework, and the Couchbase .NET SDK. It migrates both the shopping cart and the social media examples from the previous blog post. The full source code is available on GitHub.
This app is going to do the transformation, so this is an ETL approach. This approach uses Entity Framework to map relational tables to C# classes, which are then inserted into documents. The data model for Couchbase can be better represented by C# classes than by relational tables (as demonstrated in the previous blog post), so this approach has lower friction.
I’m going to use C# to write a migration program, but the automation is what’s important, not the specific tool. This is going to be essentially "throwaway" code after the migration is complete. My C# approach doesn’t do any sort of batching, and is probably not well-suited to extremely large amounts of data, so it might be a good idea to use a tool like Talend and/or an ELT approach for very large scale/Enterprise data.
I created a ShoppingCartMigrator class and a SocialMediaMigrator class. I’m only going to cover the shopping cart in this post. I pass it a Couchbase bucket and the Entity Framework context that I used in the last blog post. (You could instead pass an NHibernate session or a plain DbConnection here, depending on your preference).
public class ShoppingCartMigrator
{
    readonly IBucket _bucket;
    readonly SqlToCbContext _context;

    public ShoppingCartMigrator(IBucket bucket, SqlToCbContext context)
    {
        _bucket = bucket;
        _context = context;
    }
}
With those objects in place, I created a Go method to perform the migration, and a Cleanup method to delete any documents created in the migration, should I choose to.
For the Go method, I let Entity Framework do the hard work of the joins, and loop through every shopping cart.
public bool Go()
{
    var carts = _context.ShoppingCarts
        .Include(x => x.Items)
        .ToList();
    foreach (var cart in carts)
    {
        var cartDocument = new Document<dynamic>
        {
            Id = cart.Id.ToString(),
            Content = MapCart(cart)
        };
        var result = _bucket.Insert(cartDocument);
        if (!result.Success)
        {
            Console.WriteLine($"There was an error migrating Shopping Cart {cart.Id}");
            return false;
        }
        Console.WriteLine($"Successfully migrated Shopping Cart {cart.Id}");
    }
    return true;
}
I chose to abort the migration if there’s even one error. You may not want to do that. You may want to log to a file instead, and address all the records that cause errors at once.
For the cleanup, I elected to delete every document that has a type of "ShoppingCart".
public void Cleanup()
{
    Console.WriteLine("Delete all shopping carts...");
    var result = _bucket.Query<dynamic>("DELETE FROM `sqltocb` WHERE type='ShoppingCart';");
    if (!result.Success)
    {
        Console.WriteLine($"{result.Exception?.Message}");
        Console.WriteLine($"{result.Message}");
    }
}
This is the simplest approach. A more complex approach could involve putting a temporary "fingerprint" marker field onto certain documents, and then deleting documents with a certain fingerprint in the cleanup. (E.g. DELETE FROM sqltocb WHERE fingerprint = '999cfbc3-186e-4219-ab5d-18ad130a9dc6'). Or vice versa: fingerprint the problematic data for later analysis and delete the rest. Just make sure to clean up these temporary fields when the migration is completed successfully.
When you try this out yourself, you may want to run the console application twice, just to see the cleanup in action. The second attempt will result in errors because it will be attempting to create documents with duplicate keys.

What about the other features of SQL Server?

Not everything in SQL Server has a direct counterpart in Couchbase. In some cases, it won’t ever have a counterpart. In some cases, there will be a rough equivalent. Some features will arrive in the future, as Couchbase is under fast-paced, active, open-source development, and new features are being added when appropriate.
Also keep in mind that document databases and NoSQL databases often force business logic out of the database to a larger extent than relational databases. As nice as it would be if Couchbase Server had every feature under the sun, there are always tradeoffs. Some are technical in nature, some are product design decisions. Tradeoffs could be made to add relational-style features, but at some point in that journey, Couchbase stops being a fast, scalable database and starts being "just another" relational database. There is certainly a lot of convergence in both relational and non-relational databases, and a lot of change happening every year.
With that in mind, stay tuned for the final blog post in the series. This will cover the changes to application coding that come with using Couchbase, including:
  • SQL/N1QL
  • Stored Procedures
  • Service tiers
  • Triggers
  • Views
  • Serialization
  • Security
  • Concurrency
  • Autonumber
  • OR/Ms and ODMs
  • Transactions

Summary

This blog post compared and contrasted the data features available in Couchbase Server with SQL Server. If you are currently using SQL Server and are considering adding a document database to your project or starting a new project, I am here to help.
Check out the Couchbase developer portal for more details.
Please contact me at matthew.groves@couchbase.com, ask a question on the Couchbase Forums, or ping me on Twitter @mgroves.

This is a repost that originally appeared on the Couchbase Blog: Moving from SQL Server to Couchbase Part 1: Data Modeling.

In this series of blog posts, I’m going to lay out the considerations when moving to a document database when you have a relational background. Specifically, Microsoft SQL Server as compared to Couchbase Server.

In three parts, I’m going to cover:

  • Data modeling (this blog post)

  • The data itself

  • Applications using the data

The goal is to lay down some general guidelines that you can apply to your application planning and design.

If you would like to follow along, I’ve created an application that demonstrates Couchbase and SQL Server side-by-side. Get the source code from GitHub, and make sure to download a developer preview of Couchbase Server.

Why would I do this?

Before we get started, I want to spend a little bit of time on motivation. There are 3 main reasons why one might consider using a document data store instead of (or in addition to) a relational database. Your motivation may be one or all three:

  • Speed: Couchbase Server uses a memory-first architecture which can provide a great speed boost as compared to a relational database

  • Scalability: Couchbase Server is a distributed database, which allows you to scale out (and scale back in) capacity by just racking up commodity hardware. Built-in Couchbase features like auto-sharding, replication, and load balancing make scaling a lot smoother and easier than relational databases.

  • Flexibility: Some data fits nicely in a relational model, but some data can benefit from the flexibility of using JSON. Unlike SQL Server, schema maintenance is no longer an issue. With JSON, the schema bends as you need it to.

For these reasons and others, Gannett switched from SQL Server to Couchbase Server. If you are considering this, definitely check out Gannett’s full presentation.

It should be noted that document databases and relational databases can be complementary. Your application may be best served by one, the other, or a combination of both. In many cases, it simply is not possible to completely remove relational databases from your design, but a document database like Couchbase Server can still bring the above benefits to your software. The rest of this blog series will assume you have a SQL Server background and are either replacing, supplementing, or starting a new greenfield project using Couchbase.

The ease or difficulty of transitioning an existing application varies widely based on a number of factors. In some cases it may be extremely easy; in some cases it will be time-consuming and difficult; in some (shrinking number of) cases it may not even be a good idea.

Understanding the differences

The first step is to understand how data is modeled in a document database. In a relational database, data is typically stored flat in a table and it is given structure with primary and foreign keys. As a simple example, let’s consider a relational database for a web site that has a shopping cart as well as social media features. (In this example, those features are unrelated to keep things simple).

Relational database example diagram

In a document database, data is stored as keys and values. A Couchbase bucket contains documents; each document has a unique key and a JSON value. There are no foreign keys (or, more accurately, there are no foreign key constraints).

Here’s a high-level comparison of SQL Server features/naming as compared to Couchbase:

Table 1. SQL Server compared to Couchbase
SQL Server                Couchbase Server
Server                    Cluster
Database                  Bucket
Row(s) from table(s)      Document
Column                    JSON key/value
Primary Key               Document Key

These comparisons are a metaphorical starting point. Looking at that table, it might be tempting to take a simplistic approach. "I have 5 tables, therefore I’ll just create 5 different types of documents, with one document per row." This is the equivalent of literally translating a written language. The approach may work sometimes, but it doesn’t take into account the full power of a document database that uses JSON, just as a literal translation of a written language doesn’t take into account cultural context, idioms, and history.

Because of the flexibility of JSON, the data in a document database can be structured more like a domain object in your application. Therefore you don’t have an impedance mismatch that is often addressed by OR/M tools like Entity Framework and NHibernate.

There are two main approaches you can use when modeling data in Couchbase that we will examine further:

  • Denormalization - Instead of splitting data between tables using foreign keys, group concepts together into a single document.

  • Referential - Concepts are given their own documents, but reference other documents using the document key.

Denormalization example

Let’s consider the "shopping cart" entity.

To represent this in a relational database would likely require two tables: a ShoppingCart table and a ShoppingCartItem table with a foreign key to a row in ShoppingCart.

When creating the model for a document database, the decision has to be made whether to continue modeling this as two separate entities (e.g. a Shopping Cart document and corresponding Shopping Cart Item documents) or whether to "denormalize" and combine a row from ShoppingCart and row(s) from ShoppingCartItem into a single document to represent a shopping cart.

In Couchbase, using a denormalization strategy, a shopping cart and the items in it would be represented by a single document.

{
  "user": "mgroves",
  "dateCreated": "2017-02-02T15:28:11.0208157-05:00",
  "items": [
    {
      "name": "BB-8 Sphero",
      "price": 80.18,
      "quantity": 1
    },
    {
      "name": "Shopkins Season 5",
      "price": 59.99,
      "quantity": 2
    }
  ],
  "type": "ShoppingCart"
}

Notice that the relationship between the items and the shopping cart is now implicit: they live in the same document. There is no more need for an ID on the items to represent a relationship.

In C#, you would likely define ShoppingCart and Item classes to model this data:

public class ShoppingCart
{
    public Guid Id { get; set; }
    public string User { get; set; }
    public DateTime DateCreated { get; set; }
    public List<Item> Items { get; set; }
}

public class Item
{
    public Guid Id { get; set; }    // necessary for SQL Server, not for Couchbase
    public string Name { get; set; }
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

These classes would still make sense with Couchbase, so you can reuse them or design them this way. But with a relational database, this design does not match up in a direct way.

Hence the need for OR/Ms like NHibernate or Entity Framework. The way the above model can be mapped to a relational database is represented in Entity Framework* like this:

public class ShoppingCartMap : EntityTypeConfiguration<ShoppingCart>
{
    public ShoppingCartMap()
    {
        this.HasKey(m => m.Id);

        this.ToTable("ShoppingCart");
        this.Property(m => m.User);
        this.Property(m => m.DateCreated);
        this.HasMany(m => m.Items)
            .WithOptional()
            .HasForeignKey(m => m.ShoppingCartId);
    }
}

public class ShoppingCartItemMap : EntityTypeConfiguration<Item>
{
    public ShoppingCartItemMap()
    {
        this.HasKey(m => m.Id);

        this.ToTable("ShoppingCartItems");
        this.Property(m => m.Name);
        this.Property(m => m.Price);
        this.Property(m => m.Quantity);
    }
}

*Other OR/Ms will have similar mappings

Based on these mappings and an analysis of the use cases, I could decide that it would be modeled as a single document in Couchbase. ShoppingCartItemMap only exists so that the OR/M knows how to populate the Items property in ShoppingCart. Also, it’s unlikely that the application will be doing reads of the shopping cart without also needing to read the items.

In a later post, OR/Ms will be discussed further, but for now I can say that the ShoppingCartMap and ShoppingCartItemMap classes are not necessary when using Couchbase, and the Id field from Item isn’t necessary. In fact, the Couchbase .NET SDK can directly populate a ShoppingCart object without an OR/M in a single line of code:

public ShoppingCart GetCartById(Guid id)
{
    return _bucket.Get<ShoppingCart>(id.ToString()).Value;
}

This isn’t to say that using Couchbase will always result in shorter, easier to read code. But for certain use cases, it can definitely have an impact.

Referential example

It’s not always possible or optimal to denormalize relationships like the ShoppingCart example. In many cases, a document will need to reference another document. Depending on how your application expects to do reads and writes, you may want to keep your model in separate documents by using referencing.

Let’s look at an example where referencing might be the best approach. Suppose your application has some social media elements. Users can have friends, and users can post text updates.

One way to model this:

  • Users as individual documents

  • Updates as individual documents that reference a user

  • Friends as an array of keys within a user document

With two users and two updates, we would have 4 documents in Couchbase that look like this:

[
  // Key: "7fc5503f-2092-4bac-8c33-65ef5b388f4b"
  {
    "friends": [
      "c5f05561-9fbf-4ab0-b68f-e392267c0703"
    ],
    "name": "Matt Groves",
    "type": "User"
  },

  // Key: "c5f05561-9fbf-4ab0-b68f-e392267c0703"
  {
    "friends": [ ],
    "name": "Nic Raboy",
    "type": "User"
  },

  // Key: "5262cf62-eb10-4fdd-87ca-716321405663"
  {
    "body": "Nostrum eligendi aspernatur enim repellat culpa.",
    "postedDate": "2017-02-02T16:19:45.2792288-05:00",
    "type": "Update",
    "user": "7fc5503f-2092-4bac-8c33-65ef5b388f4b"
  },

  // Key: "8d710b83-a830-4267-991e-4654671eb14f"
  {
    "body": "Autem occaecati quam vel. In aspernatur dolorum.",
    "postedDate": "2017-02-02T16:19:48.7812386-05:00",
    "type": "Update",
    "user": "c5f05561-9fbf-4ab0-b68f-e392267c0703"
  }
]

I decided to model 'friends' as a one-way relationship (like Twitter) for this example, which is why Matt Groves has Nic Raboy as a friend but not vice-versa. (Don’t read too much into this, Nic :).

The way to model this in C# could be:

public class FriendbookUser
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public virtual List<FriendbookUser> Friends { get; set; }
}

public class Update
{
    public Guid Id { get; set; }
    public DateTime PostedDate { get; set; }
    public string Body { get; set; }

    public virtual FriendbookUser User { get; set; }
    public Guid UserId { get; set; }
}

The Update to FriendbookUser relationship can be modeled as either a Guid or as another FriendbookUser object. This is an implementation detail. You might prefer one, the other, or both, depending on your application needs and/or how your OR/M works. In either case, the underlying model is the same.

Here’s the mapping I used for these classes in Entity Framework. Your mileage may vary, depending on how you use EF or other OR/M tools. Focus on the underlying model and not the details of the OR/M mapping tool.

public class UpdateMap : EntityTypeConfiguration<Update>
{
    public UpdateMap()
    {
        this.HasKey(m => m.Id);

        this.ToTable("FriendBookUpdates");
        this.Property(m => m.Body);
        this.Property(m => m.PostedDate);
        this.HasRequired(m => m.User)
            .WithMany()
            .HasForeignKey(m => m.UserId);
    }
}

public class FriendbookUserMap : EntityTypeConfiguration<FriendbookUser>
{
    public FriendbookUserMap()
    {
        this.HasKey(m => m.Id);

        this.ToTable("FriendBookUsers");
        this.Property(m => m.Name);
        this.HasMany(t => t.Friends)
            .WithMany()
            .Map(m =>
            {
                m.MapLeftKey("UserId");
                m.MapRightKey("FriendUserId");
                m.ToTable("FriendBookUsersFriends");
            });
    }
}

If, instead of storing these entities as separate documents, we applied the same denormalization as the shopping cart example and attempted to store a user and updates in one document, we would end up with some problems.

  • Duplication of friends: each user would store the details for their friends. This is not tenable, because now a user’s information would be stored in multiple places instead of having a single source of truth (unlike the shopping cart, where having the same item in more than one shopping cart probably doesn’t make any domain sense). This might be okay when using Couchbase as a cache, but not as a primary data store.

  • Size of updates: Over a period of regular use, an individual user could post hundreds or thousands of updates. This could lead to a very large document which could slow down I/O operations. This can be mitigated with Couchbase’s sub-document API, but also note that Couchbase has a ceiling of 20MB per document.

Note: There’s an N+1 problem here too (friends of friends, etc), but I’m not going to spend time on addressing that. It’s a problem that’s not unique to either database.

Additionally, it may not be the case that when the application reads or writes a user that it will need to read or write friends & updates. And, when writing an update, it’s not likely that the application will need to update a user. Since these entities may often be read/written on their own, that indicates that they need to be modeled as separate documents.

Note the array in the Friends field in the user document and the value in the User field in the update document. These values can be used to retrieve the associated documents. Later in this post, I’ll discuss how to do it with key/value operations and how to do it with N1QL.

To sum up, there are two ways to model data in a document database. The shopping cart example used nested objects, while the social media example used separate documents. In those examples, it was relatively straightforward to choose. When you’re making your own modeling decisions, here’s a handy cheat sheet:

Table 2. Modeling Data Cheat Sheet
If ...                                              Then consider ...
Relationship is 1-to-1 or 1-to-many                 Nested objects
Relationship is many-to-1 or many-to-many           Separate documents
Data reads are mostly parent fields                 Separate documents
Data reads are mostly parent + child fields         Nested objects
Data reads are mostly parent or child (not both)    Separate documents
Data writes are mostly parent and child (both)      Nested objects

Key/value operations

To get document(s) in Couchbase, the simplest and fastest way is to ask for them by key. Once you have one of the FriendbookUser documents above, you can then execute another operation to get the associated documents. For instance, I could ask Couchbase to give me the documents for keys 2, 3, and 1031 (as a batch operation). This would give me the documents for each friend. I can then repeat that for Updates, and so on.

The benefit to this is speed: key/value operations are very fast in Couchbase, and you will likely be getting values directly from RAM.

The drawback is that it involves at least two operations (get FriendbookUser document, then get the Updates). So this may involve some extra coding. It may also require you to think more carefully about how you construct document keys (more on that later).
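
A quick sketch of that kind of batch read with the .NET SDK (the keys are just the ones from the example above, and a _bucket reference is assumed):

// multi-get: pass a list of document keys, get back a dictionary of results keyed by document key
var keys = new List<string> { "2", "3", "1031" };
var results = _bucket.Get<FriendbookUser>(keys);

foreach (var result in results.Values.Where(r => r.Success))
{
    Console.WriteLine(result.Value.Name);
}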

N1QL

In Couchbase, you have the ability to write queries using N1QL, which is SQL for JSON. This includes the JOIN keyword. This allows me to, for instance, write a query to get the 10 latest updates and the users that correspond to them.

public List<Update> GetTenLatestUpdates()
{
    var n1ql = @"SELECT up.body, up.postedDate, { 'id': META(u).id, u.name} AS `user`
        FROM `sqltocb` up
        JOIN `sqltocb` u ON KEYS up.`user`
        WHERE up.type = 'Update'
        ORDER BY STR_TO_MILLIS(up.postedDate) DESC
        LIMIT 10;";
    var query = QueryRequest.Create(n1ql);
    query.ScanConsistency(ScanConsistency.RequestPlus);
    var result = _bucket.Query<Update>(query);
    return result.Rows;
}

The result of this query would be:

[
  {
    "body": "Autem occaecati quam vel. In aspernatur dolorum.",
    "postedDate": "2017-02-02T16:19:48.7812386-05:00",
    "user": {
      "id": "c5f05561-9fbf-4ab0-b68f-e392267c0703",
      "name": "Bob Johnson"
    }
  },
  {
    "body": "Nostrum eligendi aspernatur enim repellat culpa eligendi maiores et.",
    "postedDate": "2017-02-02T16:19:45.2792288-05:00",
    "user": {
      "id": "7fc5503f-2092-4bac-8c33-65ef5b388f4b",
      "name": "Steve Oberbrunner"
    }
  },

  // ... etc ...
]

N1QL allows you to have great flexibility in retrieving data. I don’t have to be restricted by just using keys. It’s also easy to pick up, since it’s a superset of SQL that SQL Server users will be comfortable with quickly. However, the tradeoff here is that indexing is important. Even more so than SQL Server indexing. If you were to write a query on the Name field, for instance, you should have an index like:

CREATE INDEX IX_Name ON `SocialMedia` (Name) USING GSI;

Otherwise the query will fail to execute (if you have no indexing) or it will not be performant (if you only have a primary index created).

There are pros and cons in deciding to use referencing or not. The values in friends and user are similar to foreign keys, in that they reference another document. But there is no enforcement of values by Couchbase. The management of these keys must be handled properly by the application. Further, while Couchbase provides ACID transactions for single document operations, there is no multi-document ACID transaction available.

There are ways to deal with these caveats in your application layer that will be discussed further in later blog posts in this series, so stay tuned!

Key design and document differentiation

In relational databases, rows of data (typically, not always) correspond to a primary key, which is often an integer or a Guid, and sometimes a composite key. These keys don’t necessarily have any meaning: they are just used to identify a row within a table. For instance, two rows of data in two different tables may have the same key (an integer value of 123, for instance), but that doesn’t necessarily mean the data is related. This is because the schema enforced by relational databases often conveys meaning on its own (e.g. a table name).

In document databases like Couchbase, there isn’t anything equivalent to a table, per se. Each document in a bucket must have a unique key. But a bucket can have a variety of documents in it. Therefore, it’s often wise to come up with a way to differentiate documents within a bucket.

Meaningful keys

For instance, it’s entirely possible to have a FriendbookUser document with a key of 123, and an Update document with a key of 456. However, it might be wise to add some more semantic information to the key. Instead of 123, use a key of FriendbookUser::123. The benefits to putting semantic information in your key include:

  • Readability: At a glance, you can tell what a document is for.

  • Referenceability: If you have a FriendbookUser::123 document, then you could have another document with a key FriendbookUser::123::Updates that has an implicit association.

If you plan on using N1QL, then you may not need keys to be this semantically meaningful. In terms of performance, the shorter the key is, the more of them can be stored in RAM. So only use this pattern if you plan on making heavy use of key/value operations instead of N1QL queries.
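
In practice, this is usually just string composition in the application layer. A trivial sketch (the prefix convention is mine):

// compose semantic keys from a type prefix and an identifier
string UserKey(Guid id) => $"FriendbookUser::{id}";
string UserUpdatesKey(Guid id) => $"FriendbookUser::{id}::Updates";

// key/value lookups then use the composed key directly
var user = _bucket.Get<FriendbookUser>(UserKey(userId)).Value;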

Discriminator fields

When using N1QL, another tactic that can be used in addition to (or instead of) meaningful keys is to add one or more fields to a document that differentiate it from other kinds of documents. This is often implemented as a type field within a document.

{
    "address" : "1800 Brown Rd",
    "city" : "Groveport",
    "state" : "OH",
    "type" : "address"
}

There’s nothing magical about the type field. It’s not a reserved word within a document and it’s not treated specially by Couchbase Server. It could just as easily be named documentType, theType, etc. But it can be useful within your application when using N1QL to query documents of a certain kind.

SELECT d.*
FROM `default` d
WHERE d.type = 'address'
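In application code, the discriminator is often just a property that is always set to the same value for a given class. Here’s a minimal sketch; the class shape and property names are assumptions, not taken from the sample project:

public class Address
{
    // property names here are assumptions; whether they serialize to the exact
    // lowercase field names shown above depends on your serializer settings
    public string AddressLine { get; set; }
    public string City { get; set; }
    public string State { get; set; }

    // discriminator: every document of this C# type gets a type of "address"
    public string Type => "address";
}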

You may even take it a step further and add an embedded object to your documents to act as a kind of faux 'metadata':

{
    "address" : "1800 Brown Rd",
    "city" : "Groveport",
    "state" : "OH",
    "documentInfo" : {
        "type" : "address",
        "lastUpdated" : "1/29/2017 1:31:10 PM",
        "lastUpdatedBy" : "mgroves"
    }
}

That may be overkill for some applications. It’s similar to a pattern I’ve seen in relational databases: a 'root' table used to simulate inheritance, or perhaps the same fields tacked on to every table.

Conclusion of part 1

This blog post covered data modeling using denormalization, data modeling using referencing, key design, and discriminator fields. Modeling data in a document database is a thought process, something of an art form, and not a mechanical process. There is no prescription for how to model your data in a document database: it depends greatly on how your application interacts with your data.

You can get the source code for the entire blog series on GitHub now, parts of which were featured in this blog post. If you have questions about various parts of that code, feel free to leave a comment below, or open an issue on GitHub.

Stay tuned for the next blog in the series, where data and data migration will be discussed.

If you have any questions, please leave a comment below, contact me on Twitter, or use the Couchbase Forums.

This is a repost that originally appeared on the Couchbase Blog: .NET Core List, Queue, and Dictionary Data Structures backed by Couchbase.

The addition of the sub-document API to Couchbase 4.5 has paved the way for efficient data structure support in Couchbase.

In this blog post, I’m going to show a demo of three types of data structures you can use with the Couchbase .NET SDK:

  • List - a list of objects, basically a List<T> backed by Couchbase

  • Queue - a queue of objects, basically a Queue<T> backed by Couchbase

  • Dictionary - a dictionary of objects, basically a Dictionary<K,T> backed by Couchbase

I’ll also discuss a little bit how this works behind the scenes.

You can play along at home if you like. The source code for this blog is available on GitHub, and Couchbase Server is free to download (developer previews of version 5 are currently available monthly).

List

A List<T> is a .NET data structure that is held in memory. With the data structures provided by the Couchbase .NET SDK, you can store it in a Couchbase document.

To create a Couchbase-backed List:

var list = new CouchbaseList<dynamic>(bucket, "myList");

The string "myList" corresponds to the key for the document that will contain the list. When using CouchbaseList<T>, a single document with that key will be created (if one doesn’t exist already). If a document by that key already exists, CouchbaseList will use it.

You can now add/remove items from the list and that will all be persisted to the document. You can also perform other operations like getting a count of the items in the list.

// add 10 objects to the list
for(var i = 0; i < 10; i++)
    list.Add(new { num = i, foo = "bar" + Guid.NewGuid()});

// remove an item from the list by index
list.RemoveAt(5);

// show an item from the list by index (read it into a variable once)
var item = list[5];
Console.WriteLine("5th item in the list: " + item.foo + " / " + item.num);

The above code would result in a document with the key "myList" that looks like the JSON below. Notice that there is no item with num 5, because it was removed.

There’s something subtle in the above example that needs to be pointed out. Notice that I used var item = list[5]; and then item.foo and item.num in the WriteLine. If I had used list[5].foo and list[5].num directly, each indexer access would have resulted in a separate sub-document call to Couchbase. Not only is that less efficient, but it’s also possible for the values to change between the two calls.

[
  {
    "num": 0,
    "foo": "bara1fd74ee-a790-4a0f-843c-abe449cb8b1d"
  },
  {
    "num": 1,
    "foo": "bardc1d8f9a-4e93-46f9-b8ae-ec036743869e"
  },
  {
    "num": 2,
    "foo": "bar9a60abe9-1e04-4fba-bd1f-f1ec39d69f56"
  },
  {
    "num": 3,
    "foo": "bar9566605b-7abf-4a0c-aa9d-63b98ce86274"
  },
  {
    "num": 4,
    "foo": "bar6261323f-de50-42a7-a8a7-6fcafb356deb"
  },
  {
    "num": 6,
    "foo": "bar13832bcb-2aa0-491a-a01f-1d496f999ffc"
  },

  // ... etc ...
]

Queue

Very similar to List, you can create a Couchbase-backed queue:

var queue = new CouchbaseQueue<dynamic>(bucket, "myQueue");

A queue is stored just like a list. The difference is in how you access it: items come out in first-in, first-out order, which is reflected by the operations you perform on a queue: Enqueue and Dequeue.

for(var i = 0; i < 3; i++)
    queue.Enqueue(new { num = i, foo = "baz" + Guid.NewGuid()});

// dequeue
var item = queue.Dequeue();
Console.WriteLine("item num " + item.num + " was dequeued. There are now " + queue.Count + " items left in the queue.");

The above code would result in a document with the key "myQueue" (see the JSON below). Notice there is no object in the array with num 0, because it was dequeued.

[
  {
    "num": 1,
    "foo": "baz64bb62b6-bf23-4e52-b584-d2fa02accce6"
  },
  {
    "num": 2,
    "foo": "baz0a160bd9-aa7b-4c45-9e19-d1a3d982a554"
  }
]

Dictionary

Hopefully you’re seeing a pattern now. To create a dictionary:

var dict = new CouchbaseDictionary<string,dynamic>(bucket, "myDict");

Again, a document will be created with the given key. The operations that can be performed include Add, Remove, and the indexer [] operation.

for(var i = 0; i < 5; i++)
    dict.Add("key" + Guid.NewGuid(), new { num = i, foo = "qux" + Guid.NewGuid()} );

// print out keys in the dictionary
Console.WriteLine("There are " + dict.Keys.Count + " keys in the dictionary.");
foreach(var key in dict.Keys)
    Console.WriteLine("key: " + key + ", value.num: " + dict[key].num);

A dictionary document looks like:

{
  "key5aa2520d-123c-4fca-b444-b0cb8846d46e": {
    "num": 0,
    "foo": "qux93b197dc-f175-4246-a38d-7b080eb9bea0"
  },
  "key55dee298-14c6-4da7-97a8-66c69d7e8a70": {
    "num": 1,
    "foo": "quxa593ee4c-682c-402d-887b-3f09f029e9b6"
  },
  "key3386afcf-7b70-4e4d-b9ae-6defbca33fe7": {
    "num": 2,
    "foo": "qux1259ae94-1008-4e1f-86a1-bfbd0873b09b"
  },
  "key2bc8c451-f125-4282-9fb4-7ea15f4b3168": {
    "num": 3,
    "foo": "qux1b6fb62b-9918-46dc-9a2f-610a55d017ef"
  },
  "key3f7041f3-abd3-49c7-a373-454cbd2ac0fc": {
    "num": 4,
    "foo": "qux0a87655f-197d-4fb2-8a54-b1de6e288de4"
  }
}

A note about C# dynamic: I used dynamic to keep the code samples short and simple. In your application, you are probably better off using a real defined C# type. It all gets serialized to JSON in Couchbase, of course.
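For example, here’s a quick sketch with a defined type instead of dynamic (the ListItem class and "myTypedList" key are made up for this example):

public class ListItem
{
    public int Num { get; set; }
    public string Foo { get; set; }
}

// same CouchbaseList, but strongly typed
var typedList = new CouchbaseList<ListItem>(bucket, "myTypedList");
typedList.Add(new ListItem { Num = 1, Foo = "bar" });

ListItem first = typedList[0];
Console.WriteLine(first.Num + " / " + first.Foo);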

Behind the scenes

Before the sub-document API was released in Couchbase Server 4.5, these data structures were possible, of course. The catch was that you would have to load the entire document, deserialize it into a list, make changes to the list, and then save the entire document back. If you have large data structures but are only reading or changing a single item, that means wasted time, wasted bandwidth, and possibly increased contention.

The sub-document API (which you can use directly; I covered it in the Sub-document API in Couchbase Server 4.5 with the .NET SDK (revisited) blog post) is used behind the scenes by CouchbaseList, CouchbaseQueue, and CouchbaseDictionary. So when you add an item to a CouchbaseList, for instance, only that item is sent over the wire, not the entire list.
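To give a feel for what that looks like, here’s a rough sketch of using the sub-document API directly. The "myDoc" document and its "items" array are hypothetical (the list documents shown earlier actually store their array at the document root), so treat this as illustrative rather than as the SDK’s internal implementation:

// append a single element to the "items" array without fetching the document
bucket.MutateIn<dynamic>("myDoc")
    .ArrayAppend("items", new { num = 99, foo = "bar" })
    .Execute();

// read a single value back by path, again without fetching the whole document
var fragment = bucket.LookupIn<dynamic>("myDoc")
    .Get("items[0].foo")
    .Execute();
Console.WriteLine(fragment.Content<string>("items[0].foo"));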

Some operations will still need to get the entire document. For instance, iterating through a collection using a foreach loop will retrieve the full document, and removing an item from a list will result in the full document being scanned. But if sub-document operations come along in the future to support those actions, the SDK implementations will be updated accordingly.

Summary

These data structures are another tool to help you manage your data. Since they use the sub-document API, they are generally more performant than a whole-document approach. For more detail, check out the Data Structures documentation.

Have questions? Feedback? Need help? Please visit our forums, ping me on Twitter @mgroves, or leave a comment.

J. Tower is answering all my burning questions about .NET and .NET Core

Show Notes:

J. Tower is on Twitter

Want to be on the next episode? You can! All you need is the willingness to talk about something technical.

Theme music is "Crosscutting Concerns" by The Dirty Truckers, check out their music on Amazon or iTunes.

About the Author

Matthew D. Groves lives in Central Ohio. He works remotely, loves to code, and is a Microsoft MVP.
