
The Theory of Document Modeling

Anton Malykh, Andrei Mantsivoda
box@ontobox.io

4 September 2017

Abstract. In this paper, the concept of a locally simple model is introduced. Locally simple models are arbitrarily complex models built from relatively simple components. Many practically important domains can be described as locally simple models, for example, the business models of enterprises and companies.
Up to now, research in the automation of human reasoning has mainly concentrated on the most intellectually intensive activities, such as automated theorem proving. On the other hand, a 'job', as a component of a retailer's business model, is much simpler and can be modeled and automated far more easily. At the same time, the whole retailer model, as an integrated system of such 'jobs', is extremely complex.
In this paper, we offer a mathematical elaboration of the general concept of locally simple models. This formal system is intended for modeling a wide range of practical domains, so we must also take perceptual and psychological issues into account. Logic is elitist, and if we want to attract as many people as possible to our models, we need to hide this elitism behind a metaphor to which the general public is accustomed. As such a metaphor, we use the concept of a document, so our locally simple models are called document models. Document models are built in the paradigm of semantic programming, which allows us to achieve another important goal – to make document models executable.
Executable models are models that can act as practical information systems. Thus, if our model is executable, the programming phase can be skipped. Using a model directly, instead of programming, brings important advantages, for example, a drastic reduction in development and maintenance costs. Moreover, since the model remains whole and sound, and is not dissolved within programming modules, we can directly apply AI tools, in particular machine learning. This significantly expands the options for automating and robotizing management and control activities.

Keywords: locally simple model, document model, semantic modeling, smart contract.

Introduction

One of the important conclusions from the classic works on semantic programming [1], [2] is that the description of actions in a particular domain can be done not only in an imperative style (based on programming languages), but also in a declarative one – through modeling the domain in some logical system. Of course, such a logical system must satisfy a number of requirements, chief among which is the capability of interpreting logical descriptions as procedures. The authors of semantic programming formulated a general mathematical approach explaining how such models might look. We call these models executable, because they not only describe the domain declaratively, but also declare a set of actions that can be performed within the domain. If a model is executable, it can work directly as an information system.

Semantic domain modeling is similar to developing a specification of some information system. A semantic model is the result of this work. The executability of a model means that as soon as we have specified some IT system, the programming stage is not needed, since its functionality is automatically derived from the semantic model itself.

The global consequence of this idea is that

We can replace programming by modeling.

The results of such a substitution are quite impressive:

First, semantic modeling, where applicable, has significant economic advantages over classical programming.

Second, it removes intermediaries such as programmers: domain specialists can develop the models directly.

Third, in the light of the 4th industrial revolution that we are witnessing today, the roles (jobs) formalized in a model can be robotized.

Thus, if we manage to apply semantic modeling to some industry, this can have a disruptive impact on it, and give significant competitive advantages.

Locally Simple Models

Knowledge modeling and automated reasoning are parts of mathematical logic and artificial intelligence with more than half a century of history. The greatest impetus was the invention of the resolution principle in 1965 [3]. It made it possible to achieve very significant results in automated theorem proving [4].

Tim Berners-Lee proposed to use the potential of knowledge modeling on the Web [5]. The idea was to apply logical means to achieve more efficient data management on the Web and to create automated agents. From this idea a new direction of research has grown – the Semantic Web [6]. A variety of description logics were used as its logical basis [7], [8].

Unfortunately, the Semantic Web did not achieve the goals set by Berners-Lee. It has become a rather narrow mathematical discipline with minimal influence on the outside world and very weak dissemination in practical domains. The major problem with the Semantic Web, in our opinion, is that it treats knowledge processing as a purely mathematical problem, whereas in fact it can only be solved at an interdisciplinary level. The key problems of knowledge processing are perceptual in nature and lie in the field of cognitive psychology, so finding "yet another" logical formalism does not improve the situation. We have a great spectrum of brilliant logical techniques, but few of them enjoy practical significance.

Here we, as investigators, fall into an 'intellectuality' trap, trying to model our own thinking instead of looking around. We fight for automated theorem proving and develop complex knowledge models, but do not pay attention to the fact that the bulk of the tasks people around us are trying to solve are much simpler. For example, a retailer company as a business model is an extremely complex system. But this model is designed so that each of its components ('jobs') is quite understandable for a fairly wide range of people and can be modeled logically. The complexity of the retailer's model arises when these locally simple 'jobs' are used as puzzle pieces to build a holistic business mosaic. And here the complexity can be exorbitant.

Our hypothesis is that a huge number of practically significant models in our world are locally simple. But we must ensure that the logical formalism we use is comprehensible to a wide range of users. Otherwise, the fate of the Semantic Web awaits us. Logic is elitist and difficult to understand. So, ideally, people should not even realize that they work within a formal logical framework. The solution here is to find a metaphor which is familiar to people and which would allow them to operate correctly within locally simple models without the need to study logic. As such a metaphor, we have selected the concept of a document.

Document models

In this paper, we define the notion of a document model. A document model is a version of an executable semantic model, which is based on the metaphor of the document as the basic construct of logical descriptions. Document modeling implements our concept of knowledge management.

First, document models are executable. This makes the programming stage unnecessary, since the model itself can play the role of an information system.

Second, the model uses the notion of a document as a metaphor. The document model is organized as a collection of logical structures that can be interpreted as "ordinary" documents, while preserving all the advantages of semantic modeling and artificial intelligence. On the other hand, for users, working with such a model can be organized as conventional work with documents.

Below we introduce the formal definition of a document model.

Basic Types

Let

\[\mathcal{B} = \langle B_1,\ldots, B_k;\; \Omega \rangle,\]

be a multi-sorted algebraic system defining the basic data types, where the \(B_i\) are the main data sets. In practice, they can be strings, integers, reals, images, video, etc. The signature

\[\Omega = \langle \Omega_P, \Omega_F, \gamma\rangle\]

consists of

  • a set \(\Omega_P\) of predicate symbols,
  • a set \(\Omega_F\) of functional symbols, and
  • an arity function \(\gamma\), which assigns to every predicate and functional symbol the number of its arguments.

All elements of all sorts are distinguished (represented by constants, that is, 0-ary functional symbols from \(\Omega_F\)). Constants are denoted by \(c, c_i\). Elements corresponding to constants are denoted by \(\mathbf{c}, \mathbf{c}_i\).

By

\[\mathbb{B} = \{\mathbf{b}^1,\ldots, \mathbf{b}^k, \mathbf{any}\}\]

we denote the set of names of the basic datatypes. The name \(\mathbf{any}\) denotes the type of all elements.

All predicate and functional symbols are typed. The type of a predicate symbol \(p, \gamma(p) = n\) is an expression

\[\langle \mathbf{b}_1,\ldots, \mathbf{b}_n\rangle,\]

where \(\mathbf{b}_i\in \mathbb{B}\) means that the \(i\)th argument of the predicate corresponding to \(p\) must belong to the basic set named by \(\mathbf{b}_i\). The type of a functional symbol \(f, \gamma(\,f) = n\) is an expression

\[\langle \mathbf{b}_1,\ldots, \mathbf{b}_n, \mathbf{b}_{n+1}\rangle,\]

\(\mathbf{b}_i\in \mathbb{B}\), where \(\mathbf{b}_i\), \(1\leq i \leq n\), determine the types of the arguments, and \(\mathbf{b}_{n+1}\) determines the type of the result of the corresponding function.

The notions of a term, an atomic formula and the term type are defined inductively as usual.
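
As an illustration of the typing of functional symbols and of the usual inductive definition of the term type, here is a minimal Python sketch. The tuple encoding of terms and the names FuncSym and type_of_term are assumptions made purely for illustration, not part of the formal system.

  # A minimal sketch of typed functional symbols and inductive term typing.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class FuncSym:
      name: str
      arg_types: tuple     # <b_1, ..., b_n>
      result_type: str     # b_{n+1}

  # A term is encoded as a nested tuple: (symbol_name, (subterm, ...)).
  def type_of_term(term, sig):
      """Return the type of a term, checking argument types inductively."""
      name, args = term
      f = sig[name]
      if len(args) != len(f.arg_types):
          raise TypeError(f"{name} expects {len(f.arg_types)} arguments")
      for sub, expected in zip(args, f.arg_types):
          actual = type_of_term(sub, sig)
          if expected not in ("any", actual):
              raise TypeError(f"argument of {name}: expected {expected}, got {actual}")
      return f.result_type

  # Example: constants 5:Int, "a":String (0-ary symbols) and len:String -> Int.
  sig = {
      "5":   FuncSym("5", (), "Int"),
      "a":   FuncSym("a", (), "String"),
      "len": FuncSym("len", ("String",), "Int"),
  }
  print(type_of_term(("len", (("a", ()),)), sig))   # -> Int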

Sequences

A sequence is an expression

\[(e_1,\ldots,e_m),\]

where \(e_i\) are some elements. Below, constants from \(\Omega_F\) and references to documents will play the role of these elements. The following equalities hold on sequences

\[ \begin{eqnarray*} (e) &= &e\\ (\ldots, (e_1, \ldots, e_k), \ldots) &= &(\ldots, e_1, \ldots, e_k, \ldots) \end{eqnarray*} \]

The first equality indicates that a singleton sequence is not distinguishable from the element itself. The second equality says that sequences are flat (without nesting, unlike, for example, lists).

The empty sequence with no elements is denoted by \((\,)\).
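
For illustration, the two equalities can be read as a normalization procedure. The following is a minimal Python sketch; modeling sequences as Python tuples is only an assumption of the sketch.

  # A minimal sketch of sequence normalization implementing the two equalities:
  # nested sequences are flattened, and a singleton sequence collapses to its
  # element.
  def normalize(seq):
      flat = []
      for e in seq:
          if isinstance(e, tuple):              # nested sequence: splice it in
              inner = normalize(e)
              flat.extend(inner if isinstance(inner, tuple) else [inner])
          else:
              flat.append(e)
      if len(flat) == 1:                        # (e) = e
          return flat[0]
      return tuple(flat)

  assert normalize((1, (2, 3), 4)) == (1, 2, 3, 4)   # flattening
  assert normalize((7,)) == 7                         # singleton = element
  assert normalize(()) == ()                          # the empty sequence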

To determine the number of elements in a sequence, let us introduce the notion of a cardinality. We define the following cardinalities:

The set of all cardinalities is denoted by \(\mathbb{C}\).

Documents

A document is the main concept of a document model. The role of documents is similar to objects in the object-oriented approach (OO). The basic facts about documents:

Let us introduce a countable set of new constants

\[\mathbb{I} = \{id_1, id_2, \ldots \}\]

which is called the set of names (identifiers). This set is divided into two disjoint countable subsets of form names \(\mathbb{I}_F\) and document field names \(\mathbb{I}_D\): \[\mathbb{I}_F \cap \mathbb{I}_D = \emptyset\] \[\mathbb{I}_F \cup \mathbb{I}_D = \mathbb{I}\]

In what follows, the form names will determine the types of documents. So, we can define the set of all datatypes as the union of basic type names and form names:

\[\mathbb{B}\cup\mathbb{I}_F\]

A document field description is a triple

\[\mathbf{d} = \langle d, \mathbf{g}, \mathbf{c} \rangle,\]

where \(d\in\mathbb{I}_D\) is a field name, \(\mathbf{g}\in\mathbb{B}\cup\mathbb{I}_F\) its type, and \(\mathbf{c}\in\mathbb{C}\) its cardinality. Document field names will be denoted by \(d\), possibly with indices. The document field description corresponding to \(d\) will be denoted by \(\mathbf{d}\).

For convenience, we use a program-like notation in field definitions. For instance,

  age:Int!
  children_names:String*

instead of \(\langle age, Int, !\rangle\) and \(\langle children\_names, String, *\rangle\), respectively. Informally, the age is represented by exactly one integer, and the children's names by an arbitrary sequence of strings.
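
A minimal Python sketch of reading this notation into field-description triples follows; the parser itself, and any cardinality symbols other than ! and *, are assumptions made for illustration only.

  # A minimal sketch of parsing the program-like field notation into triples
  # <name, type, cardinality>.
  import re
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class FieldDescription:
      name: str          # d  in I_D
      type: str          # g  in B ∪ I_F
      cardinality: str   # c  in C, e.g. '!' or '*'

  def parse_field(text):
      # The symbols '?' and '+' are assumed extra cardinalities for the sketch.
      m = re.fullmatch(r"(\w+):(\w+)([!*?+])", text.strip())
      if not m:
          raise ValueError(f"bad field description: {text!r}")
      return FieldDescription(*m.groups())

  print(parse_field("age:Int!"))                # exactly one integer
  print(parse_field("children_names:String*"))  # any sequence of strings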

Now let us introduce a countable set of new constants, which are called document states:

\[\mathbb{S} = \{\mathbf{s}^1, \mathbf{s}^2, \ldots\}\]

A transaction description is a triple

\[\mathbf{p} = \langle \mathbf{s}_{in}, \mathbf{s}_{out}, P(o) \rangle\]

Here

  • \(\mathbf{s}_{in}, \mathbf{s}_{out}\in\mathbb{S}\) are the input and output states of the transaction, and
  • \(P(o)\) is the transaction code.

The transaction code \(P(o)\) is a sequence of guarded operations:

\[P(o) = \langle G_1(o)\rightarrow P_1(o); \ldots; G_k(o)\rightarrow P_k(o) \rangle\]

Its only parameter is the document \(o\) that determines the transaction. A guarded operation sequence has the following informal semantics: it is equal to the left-most \(P_i(o)\) for which the guard \(G_i(o)\) is true.

This definition determines a transaction of a document \(o\) having the state \(\mathbf{s}_{in}\). The transaction changes its state to \(\mathbf{s}_{out}\) and performs the set of instructions generated by executing \(P(o)\).

For now, we do not specify the language for \(P(o)\). Conceptually, it should be a very simple and weak language, ensuring the elementary nature of transactions. The simplicity of this language is the most important feature of the locally simple models that we will build.
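
For illustration, the left-most-guard semantics of \(P(o)\) can be sketched as follows in Python; the dictionary encoding of the document and the instruction triples in the example are hypothetical.

  # A minimal sketch of the informal semantics of a guarded operation sequence:
  # the transaction code evaluates to the left-most P_i(o) whose guard G_i(o)
  # holds. Guards and bodies are modeled as plain Python callables.
  def run_guarded(pairs, o):
      """pairs is a list of (guard, body); each takes the document o."""
      for guard, body in pairs:
          if guard(o):
              return body(o)       # left-most applicable branch wins
      return []                    # no guard holds: no instructions generated

  # Hypothetical example: an order document that is confirmed only if non-empty.
  order = {"items": ["chair"], "state": "draft"}
  code = [
      (lambda o: len(o["items"]) == 0, lambda o: [("state", o, "rejected")]),
      (lambda o: True,                 lambda o: [("state", o, "confirmed")]),
  ]
  print(run_guarded(code, order))   # -> [('state', {...}, 'confirmed')]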

Let us now define the notion of a document form, which determines the structure of documents of the same type. The form of a document is

\[\mathbf{f} = \langle \,f, \{\mathbf{d}_1,\ldots,\mathbf{d}_n\}, \{\mathbf{s}_1,\ldots,\mathbf{s}_m\}, \{\mathbf{p}_1,\ldots,\mathbf{p}_k\} \rangle\]

Here

  • \(f\in\mathbb{I}_F\) is the form name,
  • \(\{\mathbf{d}_1,\ldots,\mathbf{d}_n\}\) is a set of document field descriptions,
  • \(\{\mathbf{s}_1,\ldots,\mathbf{s}_m\}\subseteq\mathbb{S}\) is the set of admissible states of the form, and
  • \(\{\mathbf{p}_1,\ldots,\mathbf{p}_k\}\) is a set of transaction descriptions.

We are ready to introduce the main concept of this paper, the notion of a document. To identify and access documents, their enumeration is used. To enumerate documents we use a copy of the natural number set \(\mathbb{N}\). The numbers enumerating documents will be called references. To distinguish references from ordinary integers, we will write them with the prefix \(id\), for example, \(id\mbox{:}n_1, id\mbox{:}5\). The document corresponding to the reference \(id\mbox{:}n\) is denoted by \(\nu\mbox{:}n\). If we denote by \(\mathbb{D}\) the set of all documents, then

\[\nu:\mathbb{N}\rightarrow \mathbb{D}\]

Next we define the notion of a document field, which is determined as a pair

\[\mathbb{d} = \langle d, w \rangle\]

where \(d\in\mathbb{I}_D\) is a field name, and \(w\) is a sequence of admissible values. The admissible values of fields are the elements of the basic sets \(B_1,\ldots, B_k\) and references from \(\mathbb{N}\).

We say that an element \(e\) has the type \(\mathbf{g}\) w.r.t. the enumeration \(\nu\), if one of the following conditions holds:

  1. \(\mathbf{g} = \mathbf{any}\)
  2. \(\mathbf{g} = \mathbf{b}^i\) and \(e\in B_i\)
  3. \(\mathbf{g} = f\), \(e = id\mbox{:}n\), and \(\,f\) is the form name of the document \( \nu\mbox{:}n\).

A document \(\mathbb{o}\) is a structure

\[\mathbb{o} = \langle\, f, \{\mathbb{d}_1,\ldots,\mathbb{d}_n\}, \mathbf{s}\rangle\]

where \(f\in\mathbb{I}_F\) is a form name, \(\{\mathbb{d}_1,\ldots, \mathbb{d}_n\}\) is a set of fields, and \(\mathbf{s}\in\mathbb{S}\) is its state. We say that the document \(\mathbb{o}\) has the state \(\mathbf{s}\), and denote this by \(\mathbb{o}[\mathbf{s}]\).

Let \(\sigma\) be a syntactic structure (e.g., a form or a document). We define operations \(id_F(\sigma)\) and \(id_D(\sigma)\) equal to the set of all form names and field names occurring in \(\sigma\), respectively. Let us also define

\[ \begin{eqnarray*} id_F(\{\sigma_1,\ldots,\sigma_m\}) &=& id_F(\sigma_1)\cup\ldots\cup id_F(\sigma_m)\\ id_D(\{\sigma_1,\ldots,\sigma_m\}) &=& id_D(\sigma_1)\cup\ldots\cup id_D(\sigma_m) \end{eqnarray*} \]

The signature of a document model is a finite set of document forms

\[\mathbb{M} = \{\mathbf{f}_1, \ldots, \mathbf{f}_l\}\]

closed w.r.t. the names: \(id_F(\mathbb{M}) \subseteq \{f_1, \ldots, f_l\}\), where \(f_i\) is the name of the form \(\mathbf{f}_i\).

A document model is a finite set of documents

\[\mathcal{M} = \langle \{\mathbb{o}_1, \ldots, \mathbb{o}_m\}, \nu\rangle\]

with the function \(\nu\), which determines the enumeration of documents.

We say that \(\mathcal{M}\) is a model of a signature \(\mathbb{M}\), if for each document \(\mathbb{o}\in\mathcal{M}\), having a form named \(f\), the following conditions hold:

  1. \(\mathbf{f}\in\mathbb{M}\), that is, a form with the name \(f\) is defined in the signature \(\mathbb{M}\);
  2. For each field \(\mathbb{d} = \langle d, w \rangle\) of the document \(\mathbb{o}\), the form \(\mathbf{f}\) contains the description \(\mathbf{d} = \langle d, \mathbf{g}, \mathbf{c} \rangle\), the size of the sequence \(w\) corresponds to the cardinality \(\mathbf{c}\), and each element from \(w\) has the type \(\mathbf{g}\);
  3. The state \(\mathbf{s}\) of the document \(\mathbb{o}\) is an admissible state of the form \(\mathbf{f}\).
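
For illustration, here is a minimal Python sketch of checking conditions 1–3. The dictionary encoding of forms, documents and the enumeration \(\nu\), as well as the cardinality symbols ? and + (only ! and * appear in the examples above), are assumptions of the sketch.

  # A minimal sketch of checking that a set of documents is a model of a
  # signature (conditions 1-3 above).
  def has_type(value, g, nu, basic_checks):
      if g == "any":
          return True
      if g in basic_checks:                        # basic type name b^i
          return basic_checks[g](value)
      doc = nu.get(value)                          # reference id:n to a document
      return doc is not None and doc["form"] == g  # a form name used as a type

  def cardinality_ok(card, n):
      return {"!": n == 1, "?": n <= 1, "*": True, "+": n >= 1}[card]

  def is_model(docs, signature, nu, basic_checks):
      for o in docs:
          form = signature.get(o["form"])
          if form is None:                                        # condition 1
              return False
          for name, values in o["fields"].items():
              if name not in form["fields"]:
                  return False
              g, card = form["fields"][name]
              if not cardinality_ok(card, len(values)):           # condition 2
                  return False
              if not all(has_type(v, g, nu, basic_checks) for v in values):
                  return False
          if o["state"] not in form["states"]:                    # condition 3
              return False
      return True

  signature = {"Person": {"fields": {"age": ("Int", "!")}, "states": {"new", "archived"}}}
  person = {"form": "Person", "fields": {"age": [30]}, "state": "new"}
  nu = {1: person}
  print(is_model([person], signature, nu, {"Int": lambda v: isinstance(v, int)}))  # True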

Proposition 1. The following inclusions hold:

  • \(id_F(\mathcal{M}) \subseteq id_F(\mathbb{M})\)
  • \(id_D(\mathcal{M}) \subseteq id_D(\mathbb{M})\).

Transactions

Via transactions, a model \(\mathcal{M}\) develops and modifies itself over time. Transactions are executed sequentially. Each new transaction determines the next time point of the model life cycle. To formalize this mechanism, we introduce an ordered countable set

\[\mathbb{T} = \{\mathbf{t}_0, \mathbf{t}_1, \mathbf{t}_2, \ldots \}\]

which is called the set of time points. \(\mathbf{t}_0\) is called the initial time point (a 'birth' time point). The state of the model at the time point \(\mathbf{t}_i\) is denoted by \(\mathcal{M}^{\mathbf{t}_i}\). The application of a transaction description \(\langle \mathbf{s}, \mathbf{s}^{\prime}, P(o) \rangle\) to a document \(\mathbb{o}\) is defined as follows:

Rule 1. Document transaction

\[ \frac{\mathcal{M}^{\mathbf{t}_i}[\mathbb{o}[\mathbf{s}_{in}]]\;\;\;\;\;\;\;\;\;\;\;\;\;\langle \mathbf{s}_{in}, \mathbf{s}_{out}, P(\mathbb{o}) \rangle} {\mathcal{M}^{\mathbf{t}_{i+1}}[\mathbb{o}[\mathbf{s}_{out}]]} \]

The model state \(\mathcal{M}^{\mathbf{t}_{i+1}}\) is obtained from the state \(\mathcal{M}^{\mathbf{t}_{i}}\) by the execution of instructions generated by the code \(P(\mathbb{o})\).

Rule 2 formalizes the possibility of external influence on the model. The document model, as a rule, is not isolated. It is embedded in a context, for example, the real world. The context can supply the model with various information – in the form of new documents or changed values of document fields. In practice, this can be user data input, the publication of machine learning results, etc.

External sources of information that amend the model are called oracles. Interaction with an oracle is also carried out via transactions. Each act of interaction is a separate transaction that executes the code provided by the oracle. The oracle interaction rule is defined as follows:

Rule 2. Oracle interaction

\[ \frac{\mathcal{M}^{\mathbf{t}_i}\;\;\;\;\;\;\;\;\;\;\;\;\;P_{{oracle}}} {\mathcal{M}^{\mathbf{t}_{i+1}}} \]

Here \(P_{{oracle}}\) is a code given by the oracle for the execution. The model state \(\mathcal{M}^{\mathbf{t}_{i+1}}\) is obtained from \(\mathcal{M}^{\mathbf{t}_{i}}\) by the application of instructions generated by \(P_{{oracle}}\).

Rules 1 and 2 are applied as follows:

  1. Code \(P\) is executed in the context of \(\mathcal{M}^{\mathbf{t}_i}\).
  2. The execution of \(P\) generates a finite set of instructions \(ins_1, \ldots, ins_k\).
  3. The instructions are applied sequentially to the model \(\mathcal{M}^{\mathbf{t}_i}\) modifying it to the state \(\mathcal{M}^{\mathbf{t}_{i+1}}\).
  4. If all instructions are successfully applied, then the rule is applicable and the model goes into state \(\mathcal{M}^{\mathbf{t}_{i+1}}\).
  5. If the execution of some instruction fails, then the rule is not applicable, and the model stays in the state \(\mathcal{M}^{\mathbf{t}_{i}}\).

Thus, the code \(P(o)\) does not directly affect the model. It generates a finite sequence of instructions. Then, these instructions are applied sequentially to the model, transferring it to a new state. The set of instructions is atomic – either all instructions are executed, or none (for example, if an error occurs during the execution of an instruction, the whole computation 'rolls back').
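
As an illustration, a minimal Python sketch of this two-stage, atomic execution is given below; run_transaction, generate and apply_instruction are hypothetical names, and the deep-copy-based rollback is only one possible way to obtain atomicity.

  # A minimal sketch of the two-stage, atomic execution of a transaction:
  # the code first generates the instructions, then they are applied to a
  # working copy of the model; the model advances only if every instruction
  # succeeds, otherwise it stays in the previous state ('rolls back').
  import copy

  def run_transaction(model, generate, apply_instruction):
      instructions = generate(model)        # stage 1: generate ins_1, ..., ins_k
      working = copy.deepcopy(model)        # stage 2: apply to a working copy
      try:
          for ins in instructions:
              apply_instruction(working, ins)
      except Exception:
          return model, False               # some instruction failed: no change
      return working, True                  # the new state M^{t_{i+1}}

Copy-and-swap is, of course, just one implementation of atomicity; an undo log or a database transaction would serve equally well.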

The two-stage execution of a transaction – (1) generating and (2) applying instructions – allows us to give an alternative, monotonically expanding definition of the model. Let us represent a transaction as a triple

\[ \mathbb{p}_i = \langle \mathbf{t}_i, \langle \mathbf{s}_{in}, \mathbf{s}_{out}, P(o) \rangle, [ins_1, \ldots, ins_k] \rangle \] for rule 1, and \[ \mathbb{p}_i = \langle \mathbf{t}_i, P_{oracle}, [ins_1, \ldots, ins_k] \rangle \]

for rule 2. Here \(\mathbf{t}_i\) is the time point generated by the transaction, \(\mathbb{o}\) is the document to which the first rule is applied, and \(ins_1,\ldots, ins_k\) are the instructions executed within the transaction.

Now the model state at time point \(\mathbf{t}_n\) can be implicitly represented as a pair

\[ \langle \mathcal{M}^{\mathbf{t}_0}, [\mathbb{p}_1, \ldots, \mathbb{p}_n] \rangle, \]

where \(\mathcal{M}^{\mathbf{t}_0}\) is the initial model state (as a rule, an empty model).

Proposition 2. The explicit model state \(\mathcal{M}^{\mathbf{t}_n}\) at time point \(\mathbf{t}_n\) can be obtained by the consecutive application, to \(\mathcal{M}^{\mathbf{t}_0}\), of all instructions from the transactions \(\mathbb{p}_1, \ldots, \mathbb{p}_n\).
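
Proposition 2 can be illustrated by a small replay sketch in Python; the list-of-dictionaries encoding of the transaction log and the abstract apply_instruction operation are assumptions of the sketch.

  # A minimal sketch of Proposition 2: the explicit state M^{t_n} is recovered
  # by replaying, in order, all instructions recorded in the transactions
  # p_1, ..., p_n over the initial state M^{t_0}.
  import copy

  def replay(initial_state, transaction_log, apply_instruction):
      state = copy.deepcopy(initial_state)
      for transaction in transaction_log:             # p_1, ..., p_n
          for ins in transaction["instructions"]:     # ins_1, ..., ins_k
              apply_instruction(state, ins)
      return state                                    # the explicit model state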

Instructions

To successfully work with a model, it is enough to have quite a simple set of instructions:

  1. newdoc(formname) creates an empty document of a particular form.
  2. set(doc, field, value) assigns the value value to the field field of the document doc.
  3. state(doc, s) sets the state of the document doc to s.

In practice, it is useful to have a wider range of instructions, but theoretically these three instructions are sufficient.
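
A minimal Python sketch of how these three instructions could act on a model whose documents are enumerated by integer references follows; the dictionary encoding and the helper names are assumptions made for illustration.

  # A minimal sketch of the three instructions. set_field and state are named
  # functions here only to avoid shadowing Python built-ins.
  def newdoc(model, formname):
      ref = len(model["docs"]) + 1                  # next free reference id:n
      model["docs"][ref] = {"form": formname, "fields": {}, "state": None}
      return ref

  def set_field(model, doc, field, value):          # set(doc, field, value)
      model["docs"][doc]["fields"][field] = value

  def state(model, doc, s):                         # state(doc, s)
      model["docs"][doc]["state"] = s

  model = {"docs": {}}
  ref = newdoc(model, "Person")
  set_field(model, ref, "age", (30,))
  state(model, ref, "new")
  print(model["docs"][ref])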

The properties of the language in which the transaction code \(P(o)\) is written play a key role. We could use languages with very different levels of expressiveness, e.g. Turing-complete languages. However, we intentionally choose a fairly simple language that ensures the decidability of the main problems and allows effective automated code analysis. This is one of the main features of locally simple modeling.

The other important feature of the language is that it must have a clear declarative semantics. It is needed, for example, when we apply AI tools to control the correctness of smart contracts. The separation of the instruction generation stage from the execution stage allows us to use a declarative language for forming sequences of procedural instructions. This makes the imperative part very small and manageable.

In our document model management platform, in which we have practically implemented the formal system established here, a simple declarative subset of the programming language Libretto [9] is used as such a language.

Business Processes

A business process model is a triple

\[\langle\, f, s_{beg}, s_{fin}\rangle\]

where \(s_{beg}, s_{fin}\in \mathbb{S}\) are admissible states of the form \(f\). We call them the initial and final states of the business process, respectively.

A business process implementing a model \(\langle\, f, s_{beg}, s_{fin}\rangle\) is a sequence of transactions of the document \(\mathbb{o}\) having the form \(\mathbf{f}\) (\(\mathbb{o}\) is called the main document of the business process):

\[\mathbb{o}[{s_{beg}}] \rightarrow \mathbb{o}[{s_{1}}] \rightarrow \ldots \rightarrow \mathbb{o}[{s_{n}}] \rightarrow \mathbb{o}[{s_{fin}}]\]

which starts when \(\mathbb{o}\) has the initial state \(s_{beg}\), and moves \(\mathbb{o}\) to the final state in such a way that \(s_{i} \neq s_{fin}\) for each \(i\), \(1\leq i \leq n\).
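
A minimal Python sketch of checking that a recorded sequence of states of the main document implements a business process model is given below; the state names in the example are hypothetical.

  # A minimal sketch: the sequence starts in s_beg, ends in s_fin, and no
  # intermediate state equals s_fin.
  def implements(states, s_beg, s_fin):
      return (len(states) >= 2
              and states[0] == s_beg
              and states[-1] == s_fin
              and s_fin not in states[1:-1])

  print(implements(["created", "approved", "paid", "closed"], "created", "closed"))  # True
  print(implements(["created", "closed", "paid", "closed"], "created", "closed"))    # False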

Smart Contracts

A model together with transaction definitions contains all the necessary tools for the introduction of smart contracts. Smart contracts implement the automated management of counterparty interaction, with decentralized trust support based on cryptographic technologies (a blockchain).

Within a document model we can give the following definition of a smart contract:

Definition 3. A smart contract is a business process model whose instructions are stored in a blockchain.

Document models provide tools for the introduction and intelligent management of smart contracts through transaction mechanisms. The only external thing we need is a decentralized ledger.
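
To make Definition 3 concrete, here is a toy, illustrative Python sketch of a hash-linked log of transaction instructions; it only mimics the append-only, tamper-evident property of a ledger and is in no way a substitute for a real decentralized blockchain.

  # A toy hash-linked ledger of instruction batches (illustration only).
  import hashlib, json

  def append_block(chain, instructions):
      prev_hash = chain[-1]["hash"] if chain else "0" * 64
      payload = json.dumps(instructions, sort_keys=True)
      block = {
          "prev": prev_hash,
          "instructions": instructions,
          "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
      }
      chain.append(block)
      return chain

  chain = []
  append_block(chain, [["newdoc", "Contract"], ["state", 1, "signed"]])
  append_block(chain, [["state", 1, "fulfilled"]])
  # Each block commits to its predecessor, so tampering is detectable.
  print(all(chain[i]["prev"] == chain[i - 1]["hash"] for i in range(1, len(chain))))  # True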

Conclusion

Our main efforts are now focused on practical document modeling of real-world complexity. We have learned how to work efficiently with models containing tens of millions of documents and more. In particular, we apply document models in business process management services. In cooperation with our business partners, we are implementing such projects as a budgeting and commodity circulation system for a 150-store retailer, a CRM model for a chain of furniture stores, commercial reporting and personnel management in food production, an ERP model, and others.

We are also trying to integrate document models with machine learning techniques. Unfortunately, it turns out that popular approaches to machine learning – neural networks and deep learning – cannot work directly with knowledge bases. However, there are great prospects here for logic-probabilistic methods, for example those developed by the team of E. E. Vityaev [10], [11]. For these methods, semantic models can serve as ontologies.

References

[1] Goncharov S.S., Ershov Yu.L., Sviridenko D.I. Semantic foundations of programming. Lecture Notes in Computer Science, v. 278, 1987, pp. 116-122. https://doi.org/10.1007/3-540-18740-5_28

[2] Goncharov S.S., Ershov Yu.L., Sviridenko D.I. Semantic programming. In: Information processing, Proc. IFIP 10th World Comput. Congress, Dublin, v.10, 1986. pp.1093-1100.

[3] Robinson J.A. A Machine Oriented Logic Based on the Resolution Principle. J. ACM, vol. 12, no. 1, 1965, pp. 23-41. https://doi.org/10.1145/321250.321253

[4] Riazanov A., Voronkov A. The Design and Implementation of VAMPIRE. Journal AI Communications, vol. 15, no. 2-3, 2002, pp. 91-110.

[5] Berners-Lee T., Hendler J., Lassila O. The Semantic Web. Scientific American, May 2001. https://doi.org/10.1038/scientificamerican0501-34

[6] Semantic Web activity. http://www.w3.org/2001/sw/.

[7] Horrocks I., Patel-Schneider P., Van Harmelen F. From SHIQ and RDF to OWL: The making of a Web Ontology Language. Journal of Web Semantics, vol. 1, no. 1, 2003, pp. 7-26. https://doi.org/10.1016/j.websem.2003.07.001

[8] Horrocks I., Sattler U., Tobies S. Practical reasoning for expressive description logics. In: H. Ganzinger, D. McAllester, and A. Voronkov, editors, Proceedings of the 6th International Conference on Logic for Programming and Automated Reasoning (LPAR'99), no. 1705 in Lecture Notes in Artificial Intelligence, Springer-Verlag, 1999, pp. 161-180. https://doi.org/10.1007/3-540-48242-3_11

[9] Malykh A., Mantsivoda A. Sistema Libretto: razrabotka web-resursov v edinoi modeli dannykh i znanii [Libretto System: Web Resources Development Based On a Holistic Data and Knowledge Model]. In: Proceedings of the 6th All-Russian Conference on Control Problems, Gelendzhik, September, pp. 73-75.

[10] Kovalerchuk B., Vityaev E. Data Mining in Finance: Advances in Relational and Hybrid Methods. Kluwer Academic Publishers, 2001, 456 p.

[11] Vityaev E. Semantic Probabilistic Inference of Predictions. Izv. Irkutsk. Gos. Univ. Ser. Mat., 2017, vol. 21. (in Russian)