CYPEX Documentation

CYPEX internals


In this section, you’ll be guided through the internals of CYPEX. You’ll get to know the basic architecture of the solution and gain some insights into how things work. Understanding these basic concepts helps you use CYPEX even more efficiently.

CYPEX software architecture

Before you look at the architecture of a CYPEX app from an end user perspective, you’ll first want to understand the overall software layout:

(Figure: CYPEX software architecture)

Delivering CYPEX

CYPEX is delivered as a set of Docker containers, which makes deployment easy and efficient. In general, CYPEX can run on top of an existing, standard PostgreSQL database. There are no dependencies on external extensions.

**NOTE:**


CYPEX does support the GIS data types provided by PostGIS, but that’s the only extension which is (optionally) needed. There are no hard requirements.

CYPEX consists of the following containers:

(Figure: delivering CYPEX)

  • CYPEX GUI
  • CYPEX API
  • CYPEX data API
  • CYPEX database

Let’s take a look at each of these containers in a bit more detail.

CYPEX GUI (“renderer”)

The CYPEX GUI container covers the end-user side of the tool chain. It contains a single web application and is the main entry point for all end users. This is what is generally known as “the renderer”.

The way it works is that it fetches a JSON document describing the application from the backend and turns it into a usable application in the browser. As previously stated, a CYPEX app is basically a giant JSON document describing the page and its interaction with the world.
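To illustrate the idea, here is a heavily simplified, hypothetical fragment of such an application document. All keys and values shown here are illustrative; the real schema used by CYPEX contains far more detail:

```json
{
  "name": "todo_app",
  "pages": [
    {
      "title": "My todos",
      "components": [
        { "type": "table", "query": "v_todo_overview" },
        { "type": "button", "label": "Add item", "action": "open_form" }
      ]
    }
  ]
}
```

The renderer walks a document like this and instantiates the matching React components in the browser.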

As part of the container, we ship nginx, which acts as a reverse proxy for the APIs. We use OpenResty to serve static data.

The following technologies are used.

**Technology:**

  • TypeScript
  • ReactJS
  • ReduxJS
  • Redux-Saga
  • nginx
  • OpenResty

Let’s now focus on the way CYPEX handles data.

CYPEX API

There are two basic APIs: The CYPEX API and the CYPEX data API. The CYPEX API provides the following functionality:

  • Authentication services
  • List of available apps
  • Application definitions
  • Meta data
  • Administration functionality

(Figure: CYPEX API)

The CYPEX API provides basic infrastructure and handles non-app related data using a standard REST interface (JSON).

**Technology:**

  • TypeScript
  • nodeJS

CYPEX data API

The CYPEX data API is used to serve application data. Every piece of end-user data comes from this API, not from the internal APIs.

Why is that necessary? CYPEX uses PostgREST, which generates the API automatically from the database. This approach has several advantages:

  • CYPEX is standard-compatible
  • It relies on standard tooling
  • Automatic documentation of the app-side API
  • Reliable and battle-tested

PostgREST exposes exactly one schema as an auto-generated API. That’s one (but not the only) reason why CYPEX uses views to abstract access to data. Using views exposed as a single schema by PostgREST, you can …

  • Handle security better (no need to modify permissions on base tables)
  • Support apps working on multiple schemas
  • Be more robust when it comes to changing column names, etc.
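As a sketch of the pattern - all schema, table, and role names here are made up for illustration; the names CYPEX actually generates will differ - exposing a view instead of the base table might look like this:

```sql
-- Base table lives in a private schema that PostgREST never sees
CREATE SCHEMA intern;
CREATE TABLE intern.t_contract (
    id    serial PRIMARY KEY,
    title text NOT NULL,
    price numeric
);

-- The "api" schema is the single schema exposed by PostgREST
CREATE SCHEMA api;
CREATE VIEW api.v_contract AS
    SELECT id, title, price
    FROM intern.t_contract;

-- Permissions are granted on the view only, not on the base table
GRANT USAGE ON SCHEMA api TO web_user;
GRANT SELECT ON api.v_contract TO web_user;
```

If a column in the base table is renamed, only the view definition has to be adjusted - the exposed API stays stable.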

(Figure: PostgREST)

PostgREST is standard software widely used in the community.

CYPEX database

Finally, there’s the database container. Strictly speaking, any PostgreSQL database is fine. However, to improve the user experience we’ll also ship PostgreSQL as part of the entire package. This makes it a lot easier for people who aren’t yet running PostgreSQL at scale.

Upgrading CYPEX

If you want to upgrade, all you have to do is run the new containers. Usually, no further action is needed. However, we’ll provide change scripts in case they’re necessary for an upgrade.

Please contact our support team for further information.

CYPEX internal data structure

In this section, we’ll dive into the SQL structure of CYPEX itself and learn how data is stored inside the tooling. Here’s the main data structure:

(Figure: CYPEX internal data structure)

The purpose of the tables above is as follows:

Table cypex_api_internal.t_user

In CYPEX, there are three different types of users:

  • Standard PostgreSQL Users
  • Integrated users
  • LDAP users

It can also be the case that a PostgreSQL user in the background is mapped to various email addresses in the frontend for authentication purposes.

Table t_file, t_filegroup, t_file_type:

CYPEX allows users to upload files. Since it’s vital to maintain transactional integrity and expose those files via a REST interface, you can’t just store them in a directory: files in a file system are hard to protect, and permissions can’t be handled properly. In addition, it’s important to maintain the ability to back up an entire CYPEX deployment using a single database backup.

Therefore, all files are stored in a table (t_file). In CYPEX, files have types and belong to groups, which allows CYPEX to handle groups and permissions more easily and in a more organized way.
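A minimal sketch of what such a structure can look like (all column names and definitions here are illustrative, not the actual CYPEX tables):

```sql
CREATE TABLE t_file_type (
    id   serial PRIMARY KEY,
    name text NOT NULL           -- e.g. 'pdf', 'image'
);

CREATE TABLE t_filegroup (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE t_file (
    id           serial PRIMARY KEY,
    filegroup_id int  NOT NULL REFERENCES t_filegroup (id),
    file_type_id int  NOT NULL REFERENCES t_file_type (id),
    filename     text NOT NULL,
    content      bytea NOT NULL  -- file content stored inside the database
);
```

Because the content lives in a table, a plain pg_dump captures all uploaded files along with the rest of the deployment.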

Table t_language

CYPEX supports various languages. The language table contains the supported languages. The table is mainly used to ensure referential integrity across the system. Note that not all texts are stored on the database side. Some texts are also part of the JSON document sent to the rendering engine.

Table t_module

CYPEX is structured in three levels. Note that the levels aren’t immediately visible to the end user. Behind the scenes, there is a hierarchy: Modules -> Objects (“tables CYPEX is keeping track of”) -> Object views (= “queries”).

The t_module table is the fundamental building block to represent this hierarchy at the database level.

Table t_object

Objects are basically “tables CYPEX is tracking”. Tables are a fundamental building block of any relational database. It can very well be the case that a single relational model is the foundation for more than one CYPEX application. Therefore an application has to know which tables to track in order to store metadata (column names, etc).

At the object level, CYPEX also tracks whether workflows and constraints are enforced inside the metadata. CYPEX enforces workflows by deploying triggers and constraints on the underlying tables.

Table t_object_field

CYPEX needs a lot of metadata to fuel the default rendering process. Therefore a lot of information about fields is stored in the t_object_field table. This includes, but isn’t limited to: field names, field orders, visibility, etc.

Table t_object_state

In case workflows are enabled for an object, you need to store the states an object can have (“Status” in the GUI example - see the section “Creating Workflows” above). As an example: A contract can be “offered”, “signed”, “rejected” and so on. The states associated with an object are in t_object_state. States can be added on the fly using the CYPEX GUI.
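Conceptually, the stored data might look like this (the column names and values are purely illustrative, not the actual t_object_state definition):

```sql
-- Hypothetical contents of t_object_state for a "contract" object;
-- object_id refers to the tracked table in t_object
INSERT INTO t_object_state (object_id, state_name)
VALUES
    (1, 'offered'),
    (1, 'signed'),
    (1, 'rejected');
```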

Table t_object_view

The CYPEX core engine knows the concept of “object views”. To the end user “object views” are presented as “queries” in the model builder. The idea is to have an abstraction layer between tables and the way data is presented. This is especially important in case of aggregations, default filters, etc. Metadata is associated with every object view (names, translations, etc.).

Table t_object_view_field

Similar to the way object columns are treated, we also keep track of object view columns. Object views (= queries) can have completely different columns than the underlying object does. (As an example, think of aggregations).

Table t_state_change

States are the foundation of every workflow. State changes are a way to move from one state to another. Somebody might move a contract from “offered” to “signed” (= “sign”) - but not from “signed” to “offered”. This is controlled using database-side constraints.

However, often the next state has to be calculated using functions. The way to do that is to use “pre-funcs” and “post-funcs”. The “pre-func” is called before a state is left (to determine where to go in the state machine). The post-func is called before entering the target state. We use standard PostgreSQL stored procedures to handle this behavior.

Note that the GUI does not fully support this concept yet.
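A pre-func is just an ordinary stored procedure. As a rough, hypothetical sketch - the exact signature CYPEX expects, and the table used here, are illustrative assumptions:

```sql
-- Hypothetical pre-func: decide which state to enter next,
-- based on the data of the row being moved through the workflow.
CREATE FUNCTION contract_pre_func(p_contract_id integer)
RETURNS text AS
$$
DECLARE
    v_has_price boolean;
BEGIN
    -- Only contracts with pricing information may be signed
    SELECT price IS NOT NULL INTO v_has_price
      FROM t_contract
     WHERE id = p_contract_id;

    IF v_has_price THEN
        RETURN 'signed';
    END IF;
    RETURN 'rejected';
END;
$$ LANGUAGE plpgsql;
```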

Table t_state_requirement

It can happen that states need certain preconditions. As an example: A contract can only be in state “signed” if there is pricing information entered and so on. The t_state_requirement table defines which of those requirements have to be met.

Table t_text

Texts can be assigned to pretty much everything. This includes objects, columns, states, state changes and a lot more. In CYPEX, all configuration tables share a common sequence, providing us with a system-wide unique ID. The advantage is that every piece of information can be identified clearly in a unique way. Therefore it’s easy to attach texts in various translations to everything stored in the database.

The t_text table is the place to store all those translations for all objects in the CYPEX metadata.
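The shared-sequence idea can be sketched as follows (table and column definitions are illustrative, not the actual CYPEX catalog):

```sql
-- One sequence shared by all configuration tables
CREATE SEQUENCE cypex_seq;

-- Every configuration table draws its ID from the same sequence ...
CREATE TABLE t_module (
    id   bigint PRIMARY KEY DEFAULT nextval('cypex_seq'),
    name text NOT NULL
);

CREATE TABLE t_object (
    id        bigint PRIMARY KEY DEFAULT nextval('cypex_seq'),
    module_id bigint REFERENCES t_module (id)
);

-- ... so a text can reference any configuration row via one ID space
CREATE TABLE t_text (
    target_id bigint NOT NULL,  -- the system-wide unique ID
    lang      text   NOT NULL,
    content   text   NOT NULL
);
```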

Table t_ui

A single database might serve more than just one UI. Let’s imagine a webshop: The end user part (“customers”) will run application A while backoffice people will operate using application B. Both applications will access the same underlying data.

The way that’s represented in CYPEX is by allowing multiple GUIs for the very same data to exist at the same time. In general, GUIs can be assigned to roles which means that a group of people can share the same graphical user interfaces.

Table t_ui_history

To allow for proper versioning, the full history of every graphical user interface is kept. This allows CYPEX to support releases, which let superusers/admins change applications while they are actually in use.

So far, we’ve discussed application-related metadata. In the following sections, we’ll look at the user-related tables.

Table t_user

We’ve already discussed internal users. However, there is more: You can map internal users to database users, as shown in this ER diagram:

(Figure: t_user ER diagram)

But the story isn’t as simple as it might seem:

(Figure: user types)

Table t_user_ldap

In case LDAP authentication is enabled, LDAP users have to be mapped to internal (= database-side) users. LDAP support is needed to handle single sign-on.

Table t_user_integrated

Integrated users support the idea of allowing multiple logins mapping to the same PostgreSQL user. Keep in mind that permissions on the “CYPEX Data API” side are controlled by the PostgreSQL user side. By defining an integrated user, it’s possible to map various logins to the same backend user. The same is true for LDAP as well.
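As a hypothetical illustration of the mapping - these are not the actual CYPEX table definitions:

```sql
-- Several logins can point at the same PostgreSQL role
CREATE TABLE t_user (
    id      serial PRIMARY KEY,
    pg_role text NOT NULL        -- the backend PostgreSQL user
);

CREATE TABLE t_user_integrated (
    id       serial PRIMARY KEY,
    user_id  int  NOT NULL REFERENCES t_user (id),
    email    text NOT NULL UNIQUE,
    password text NOT NULL       -- stored hashed in practice
);

-- alice@example.com and bob@example.com may both map to the same
-- PostgreSQL role, and therefore share the same permissions.
```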

Table cypex_log.t_user

In CYPEX, security is of the utmost importance. Therefore all access to the application is tracked and audited. The cypex_log schema facilitates tracking and auditing.

Application structure

This section includes a brief description of the underlying architecture used by CYPEX, presented from an end-user point of view. CYPEX is based on PostgreSQL and stores a lot of metadata inside the database. This includes:

  • Workflow definitions
  • Object descriptions
  • Pre-rendered definitions
  • User mappings

All other components are controlled based on this information. The end product is a JSON document, which is sent to the client. The information is then rendered on the client. To make the process efficient, the JSON document is pre-computed and stored in PostgreSQL as well.

Let’s take a look at the basic architecture:

(Figure: application architecture)

As mentioned above, the “end product” is a JSON document rendered by the browser. To produce this document, we use middleware which creates the desired data. The core idea is to have everything ready for immediate use, to maintain good performance.

Fetching data is done using an API interface which is generated by inspecting the data model as well as the server side code. The API can also be accessed directly in case you want to write custom code.

State machine internals

Let’s spend some time on the internals of the state machine. Basically all this metadata is stored in tables which can be found in the “cypex” schema. The state machine will create triggers on the data tables to ensure that data has to be correct on all levels.

Keep in mind: most people will access data through their web browser. However, it’s also possible to skip the GUI and talk directly to the API generated by CYPEX. Therefore, constraints, permissions, etc. are enforced at the lowest level possible. It’s necessary to make absolutely certain that nobody can evade the business rules enforced by the model.

The integrity of data is one of the most important assets of a professional relational database. Therefore we do everything we can to protect your data. Let’s have a look at an example: If an invoice is either “paid” or “unpaid”, we do not allow “maybe” or “who knows”. At the end of the day, you want to be “paid” and CYPEX enforces data integrity by all possible means. Fortunately, PostgreSQL provides us with the transactional foundation we need to achieve that. All layers built on top of PostgreSQL (= GUI, API, etc.) will automatically inherit PostgreSQL’s restrictions and business rules.

To show you what this means in real life, we’ve included a code snippet:

cypex=# \d todo.t_todo
                          Table "todo.t_todo"
  Column   |  Type   | Collation | Nullable |         Default
-----------+---------+-----------+----------+-------------------------
 id        | integer |           | not null | nextval('todo.t_todo_…)
 tstamp    | date    |           |          | now()
 todo_item | text    |           | not null |
 status    | text    |           |          | 'created'::text
Indexes:
    "t_todo_pkey" PRIMARY KEY, btree (id)
Check constraints:
    "cypex_761c0b39d568e31024e53b9c3eadb8c5" CHECK (status = ANY ('{created,accepted,success,failed,rejected}'::text[]))
Triggers:
    zzz_e92d74ccacdc984afa0c517ad0d557a6 BEFORE INSERT OR DELETE OR UPDATE ON todo.t_todo FOR EACH ROW EXECUTE FUNCTION cypex.trig_enforce_state_change('status')

As you can see, CYPEX generates a trigger with a unique name to ensure consistency and enforce those state changes. This comes with some performance penalty, but is necessary to maintain integrity. Also keep in mind: if workflows are changed AFTER a lot of data has been loaded, applying the change might be time-consuming - because PostgreSQL has to revalidate those constraints.

We strongly advise CYPEX users against changing those constraints manually. Instead, use the CYPEX-internal functions to make sure that the metadata catalog stays consistent.