New month, new release! In addition to small changes such as new tests and an update to placeholder search, we are pleased to announce that Meilisearch v0.15.0 brings with it the possibility to import and export dumps of your database. With both snapshots and dumps now available, we feel that we have reached an important milestone in our data management capabilities.

Snapshot versus dump

In the previous release, we introduced snapshots. Snapshots make it possible to schedule the creation of hard copies of your database, which include everything: all of your indexes, documents, settings, update history, etc. This feature is intended mainly as a safeguard—ensuring that if some failure occurs, you're able to relaunch Meilisearch quickly and efficiently without going through the hassle of re-indexing documents. Because snapshots are hard copies of the database after it has been processed by Meilisearch, they are not compatible between versions.

The same is not true of a dump. Like a snapshot, a dump is a copy of your dataset. However, whereas a snapshot is version-specific and limited to use with Meilisearch, a dump is a copy that can be used with any version of Meilisearch. Sounds pretty good, huh? There is a downside, however: if you start Meilisearch from a dump, it will need to index all of your documents, a process that takes up time and overhead. In addition, a dump will not copy certain database-specific information, such as your update history. This is because, technically speaking, a dump isn't an exact copy—more like a blueprint that allows you to create an identical dataset.

To summarize, snapshots are highly efficient but not portable—even between different versions of Meilisearch. Dumps, on the other hand, are highly portable but not very efficient, as frequently launching Meilisearch from a dump would cause your performance to suffer.

Dumps in detail

To create a dump of your dataset, you need to use the appropriate HTTP route: POST /dumps. Calling that route triggers a dump creation process. Creating a dump is an asynchronous task whose duration depends on the size of your dataset. A dump uid (unique identifier) is returned to help you track the process.

$ curl -X POST 'http://localhost:7700/dumps'
Triggers a dump creation process.

At any given moment, you can check the status of a particular dump creation process using the previously received dump uid, like so: GET /dumps/:dump_uid/status. Using this route, you can see whether your dump is still processing, has finished, or has encountered a problem.

$ curl -X GET 'http://localhost:7700/dumps/:dump_uid/status'
Checks the status of a dump creation process.
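Since dump creation is asynchronous, a small script can poll the status route until the dump is ready. The sketch below stubs the HTTP call so it runs standalone; the response field names (`uid`, `status`) and the status values are assumptions based on the description above, not guaranteed API output.

```shell
# Sketch: check a dump's status and extract the "status" field.
DUMP_UID="20201006-053243949"   # example uid returned by POST /dumps

get_status() {
  # The real call would be:
  #   curl -s "http://localhost:7700/dumps/$DUMP_UID/status"
  # Stubbed here so the snippet runs without a server:
  echo '{"uid":"20201006-053243949","status":"done"}'
}

# Extract the "status" field from the JSON response with sed:
status=$(get_status | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "dump $DUMP_UID is $status"   # prints: dump 20201006-053243949 is done
```

In a real script, you would call the status route in a loop with a short sleep until the status is no longer the in-progress value.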

After your dump creation process is done, the dump file is created and added to the dump folder. By default, this folder is /dumps at the root of your Meilisearch binary, but this can be customized. Note that if your dump folder does not exist when dump creation is triggered, Meilisearch will create it.

./meilisearch --dumps-folder /myDumpFolder
Sets a custom folder for dump exports.

Once you have exported a dump (a tar.gz file), you can use it to launch Meilisearch. Since the data contained in the dump needs to be indexed, the process will take some time to complete. Only when the dump has been fully imported will the Meilisearch server start, after which you can begin searching through your data.

./meilisearch --import-dump /myDumpFolder/12345678.tar.gz
Imports a dump and launches Meilisearch.
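Putting the pieces together, a migration between two Meilisearch versions might look like the sketch below. The uid and paths are example values; the snippet only assembles and prints the final import command rather than launching a server.

```shell
# Sketch of a version-migration workflow (example values throughout).
DUMP_DIR=./dumps                 # default dump folder
DUMP_UID=20201006-053243949      # hypothetical uid from POST /dumps

# 1. On the old version: curl -X POST 'http://localhost:7700/dumps'
# 2. Poll GET /dumps/$DUMP_UID/status until the dump is done.
# 3. Launch the new binary from the exported archive:
IMPORT_CMD="./meilisearch --import-dump $DUMP_DIR/$DUMP_UID.tar.gz"
echo "$IMPORT_CMD"
```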

Placeholder search update

Previously, the only way to trigger a placeholder search was to make a search with no query (or with a null query).

Placeholder search returns documents sorted according to the ranking rules specified by the user, without requiring any query words. It is compatible with faceting and filtering.

From this release on, making a search with an empty string will also trigger a placeholder search.

In other words, it is only the presence of an actual search query that narrows down the results.
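The rule this release changes can be summed up in a few lines. The helper below is purely illustrative (it is not part of Meilisearch); `null` stands in for an absent or null `q` parameter.

```shell
# Which query values trigger a placeholder search?
is_placeholder() {
  case "$1" in
    ""|null) echo yes ;;   # missing, null, or (new in v0.15.0) empty string
    *)       echo no  ;;
  esac
}

is_placeholder null       # yes -- no query, already a placeholder search
is_placeholder ""         # yes -- empty string, new in this release
is_placeholder "batman"   # no  -- a real query limits the results
```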

Minor changes

  • Upgrade of actix-web to v3, fixing memory leaks and crashes!
  • Addition of more tests to ensure everything is working well
  • Pest update

Contributions

A big thanks to @robjtede for his contributions to this release.

We are always eager to hear more feedback from our users and contributors! Feel free to come and talk with us using the method you prefer.

We are also thrilled by the supportiveness of our community, which seems to be constantly growing in both stars and users. Let's keep it up!

Thank you for using our search engine,