Core Technology

A radically new way to learn about individuals in emotional and cultural contexts.

Complex Contextual Understanding with High Dimensionality

RCML understands not only individuals, but also their differences and the relationships among them. Our technology combines data from multiple sources to develop complex, contextual understandings based on the relationships between people, places, and ideas.

Individual-specific Models and Explainable Relationship Patterns

RCML learns from the relationships between signals, not by fitting a predetermined model. Unlike traditional machine learning, RCML builds many internally consistent views of the world and associates scientific meaning with the relationships it discovers. We don’t need to impose meaning; we let the signals speak for themselves.

Edge Processing for Speed and Security

Our vector run-time cycle uses a computationally efficient clustering algorithm that runs on the edge device itself. This makes RCML faster, more secure, decentralized, and personalized.
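
The clustering algorithm inside the vector run-time cycle is not described here, so the sketch below is only an illustration of the idea: a cheap clustering step that runs entirely on the device, using scikit-learn’s MiniBatchKMeans as a stand-in. The function names and parameters are hypothetical.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

def cluster_on_device(signal_vectors, n_clusters=8):
    # Group raw signal vectors into clusters without the data leaving the device.
    model = MiniBatchKMeans(n_clusters=n_clusters, batch_size=64, n_init=3)
    labels = model.fit_predict(signal_vectors)
    # Only the compact summary (the centroids) would ever need to leave the device.
    return labels, model.cluster_centers_

# Example: 1,000 locally captured signal vectors with 32 features each.
signals = np.random.rand(1000, 32)
labels, centroids = cluster_on_device(signals)
print(centroids.shape)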

Key Features

RCML understands individuals specifically by mapping relationships between what they say, do, and feel over time and across contexts. Each mapping uses signals from multiple sources to create a complex cultural and emotional profile, or benchmark, for each person. Each profile exhibits an internally consistent view of the world, which allows for comparisons among individuals, groups, or concepts in order to understand emotional and cultural differences.
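
As a rough illustration of the per-person benchmark described above, the sketch below builds a profile vector from multi-modal signals and compares two profiles. The modalities, aggregation, and distance measure are assumptions made for the example, not RCML’s actual profile format.

import numpy as np

def build_profile(signals):
    # signals: dict mapping a modality name -> array of shape (n_observations, dim)
    parts = []
    for modality in ("say", "do", "feel"):        # what the person says, does, and feels
        parts.append(signals[modality].mean(axis=0))   # summarize each modality
    return np.concatenate(parts)                  # one benchmark vector per person

def profile_distance(a, b):
    # Distance between two benchmark profiles as a rough measure of difference.
    return float(np.linalg.norm(a - b))

# Example: two people observed over time in different contexts.
rng = np.random.default_rng(0)
alice = build_profile({m: rng.normal(size=(50, 16)) for m in ("say", "do", "feel")})
bob = build_profile({m: rng.normal(size=(50, 16)) for m in ("say", "do", "feel")})
print(profile_distance(alice, bob))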

Emotional Cultural Intelligence Machine Learning

How It Works

Layered Vector Cluster Pattern with Trim (LVCPT) starts by creating layers of associations between signals and meaning. The highest layer, the Global Vector Layer (GloVe), extracts emotion, opinion, and action signals and associates them with higher-level concepts, entities, or meaning. It is similar to Stanford’s GloVe (Global Vectors for Word Representation), which captures the general meaning of words, but goes several steps further by (1) processing multiple signals instead of just words and (2) allowing for multiple views of the same item within a single layer.
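
One way to picture the Global Vector Layer is as a lookup that associates each embedded signal with its nearest higher-level concepts, keeping several candidate views rather than forcing a single one. The toy concepts, embeddings, and cosine-similarity scoring below are illustrative assumptions, not RCML’s actual global layer.

import numpy as np

concept_embeddings = {                 # toy global layer: concept -> vector
    "excitement": np.array([0.9, 0.1, 0.0]),
    "curiosity": np.array([0.6, 0.7, 0.1]),
    "caution": np.array([0.0, 0.2, 0.9]),
}

def global_layer_views(signal_vec, k=2):
    # Return the k nearest concepts (by cosine similarity) for one embedded signal.
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(concept_embeddings,
                    key=lambda name: cosine(signal_vec, concept_embeddings[name]),
                    reverse=True)
    return ranked[:k]                  # keep several plausible views, not just one

# A voice-emotion signal embedded in the same toy 3-d space.
print(global_layer_views(np.array([0.8, 0.3, 0.05])))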

All other layers are Local Vector Layers (LoVe), which represent clusters of locally related items, such as a single concept or individual. The global layer’s universality means it can help inform our understanding of the relationships within each local layer. That’s what makes this process so powerful and why our AI offers a significant improvement over other AI: the multi-dimensionality of the global layer means we can use many different signals to build complex contextual understanding, and then use that contextual understanding to discover more accurate, less costly associations at the local layer.
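
A minimal sketch of how the global layer might inform a local layer: globally known concept positions seed the clustering of one individual’s local signals, so local structure is discovered relative to shared meaning. The seeding strategy and the use of k-means here are assumptions for illustration only.

import numpy as np
from sklearn.cluster import KMeans

global_concepts = np.array([           # positions supplied by the global layer
    [0.9, 0.1, 0.0],                   # e.g. "excitement"
    [0.0, 0.2, 0.9],                   # e.g. "caution"
])

def local_layer(signals):
    # Cluster one individual's signals, initialised at the global concept positions.
    model = KMeans(n_clusters=len(global_concepts), init=global_concepts, n_init=1)
    return model.fit_predict(signals)

# Example: 200 local signals for a single individual.
rng = np.random.default_rng(1)
print(local_layer(rng.random((200, 3)))[:10])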

Leaps Ahead of Common AI

Common AI uses Euclidean transformations, which limit the number of signals that can be processed and restrict the world view to a single perspective. This type of Euclidean AI is unable to summarize higher-level concepts.

By contrast, RCML can handle signals from multiple sources and remember multiple perspectives and layers of meaning because it relies on hyperbolic 3-manifold geometry with geometric associative memory. RCML’s layered vectorization approach means our AI learns not just from the data we give it at the local layer, but also from known higher-level ideas and concepts that we apply at the global layer. This is how we’re able to improve on common AI with corrective feedback to over 90% accuracy – our AI processes more data with higher dimensionality and identifies non-linear, complex relationships.
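
As an illustration of why hyperbolic geometry helps, the snippet below computes distances in the Poincaré-ball model, one standard concrete realization of hyperbolic space. Whether RCML uses this particular model is an assumption; the relevant property is that distances grow rapidly toward the boundary, leaving room for many layers of meaning within the same space.

import numpy as np

def poincare_distance(u, v):
    # Geodesic distance between two points inside the unit ball.
    diff = np.linalg.norm(u - v) ** 2
    denom = (1 - np.linalg.norm(u) ** 2) * (1 - np.linalg.norm(v) ** 2)
    return float(np.arccosh(1 + 2 * diff / denom))

# The two pairs below have the same Euclidean separation (0.1), but the pair near
# the boundary is far further apart hyperbolically, which is what makes room for
# hierarchy and multiple perspectives.
a, b = np.array([0.1, 0.0, 0.0]), np.array([0.1, 0.1, 0.0])
c, d = np.array([0.95, 0.0, 0.0]), np.array([0.95, 0.1, 0.0])
print(poincare_distance(a, b), poincare_distance(c, d))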

RCML is mostly unsupervised machine learning. We say “mostly unsupervised” because, like a car engine, it needs a starter to get things moving but then operates on its own. The “starter” is guidance from sparse explicit or implicit summaries of higher-level concepts and meaning, expressed as position coordinates within a hyperbolic manifold. Using this guidance, we apply geometric associative memory to classify non-linear signal patterns and associate meaning with them.
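
A minimal sketch of the “starter” idea, under the assumption that the sparse summaries are anchor coordinates in a Poincaré ball: an embedded signal pattern is associated with the nearest anchored concept by hyperbolic distance. The anchor names, coordinates, and nearest-anchor rule stand in for geometric associative memory and are illustrative only.

import numpy as np

def poincare_distance(u, v):
    diff = np.linalg.norm(u - v) ** 2
    denom = (1 - np.linalg.norm(u) ** 2) * (1 - np.linalg.norm(v) ** 2)
    return float(np.arccosh(1 + 2 * diff / denom))

anchors = {                                  # sparse explicit guidance ("the starter")
    "trust": np.array([0.5, 0.0, 0.0]),
    "distrust": np.array([-0.5, 0.0, 0.0]),
}

def associate_meaning(signal_point):
    # Label an embedded signal pattern with the nearest anchored concept.
    return min(anchors, key=lambda name: poincare_distance(signal_point, anchors[name]))

print(associate_meaning(np.array([0.3, 0.2, 0.0])))   # -> "trust"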

How It Works

We map items – for example, concepts, people, and things – to a hyperbolic manifold. Our algorithm defines the manifold with a cluster of signals – for example, a spoken word, a voice emotion, and an action. It then samples uniformly at random from the hyperbolic space to discover the manifold associated with the item.
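
The sampling step might look roughly like the sketch below, which draws points uniformly, keeps those that lie near the item’s signal cluster, and treats the survivors as the item’s manifold. The rejection-style sampling, radius, and coordinates are assumptions made for illustration, not the actual algorithm.

import numpy as np

def poincare_distance(u, v):
    diff = np.linalg.norm(u - v) ** 2
    denom = (1 - np.linalg.norm(u) ** 2) * (1 - np.linalg.norm(v) ** 2)
    return float(np.arccosh(1 + 2 * diff / denom))

def sample_item_manifold(signal_cluster, n_samples=5000, radius=0.5):
    # Draw candidate points uniformly, keep those that stay inside the unit ball
    # and lie near the item's signals; the survivors form the item's manifold.
    rng = np.random.default_rng(4)
    candidates = rng.uniform(-1, 1, size=(n_samples, signal_cluster.shape[1]))
    candidates = candidates[np.linalg.norm(candidates, axis=1) < 0.95]
    keep = [p for p in candidates
            if min(poincare_distance(p, s) for s in signal_cluster) < radius]
    return np.array(keep)

# An item described by three signals: a spoken word, a voice emotion, and an action.
signals = np.array([[0.2, 0.1, 0.0], [0.25, 0.05, 0.1], [0.15, 0.0, 0.05]])
print(sample_item_manifold(signals).shape)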

Using hyperbolic lenses, RCML measures the similarities and differences between manifolds, which correspond to the cultural similarities and differences between items, such as the similarity among people within a community or the difference in understanding between two cultural perspectives.
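
One simple “lens” for comparing two items is the average hyperbolic distance between their point sets, sketched below. The exact comparison RCML applies is not specified here, so this is illustrative only.

import numpy as np

def poincare_distance(u, v):
    diff = np.linalg.norm(u - v) ** 2
    denom = (1 - np.linalg.norm(u) ** 2) * (1 - np.linalg.norm(v) ** 2)
    return float(np.arccosh(1 + 2 * diff / denom))

def manifold_difference(points_a, points_b):
    # Average hyperbolic distance over all cross pairs of the two point sets.
    return float(np.mean([poincare_distance(a, b) for a in points_a for b in points_b]))

# Two items (e.g. two people or two cultural perspectives) represented as small
# point sets kept safely inside the unit ball.
rng = np.random.default_rng(2)
person_a = rng.uniform(-0.2, 0.2, size=(20, 3))
person_b = rng.uniform(-0.2, 0.2, size=(20, 3)) + 0.3   # a shifted perspective
print(manifold_difference(person_a, person_b))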

What makes RCML even more powerful is the use of knot sequencing. Knot sequencing allows us to match contemporaneous signals to a single temporal point in the hyperbolic manifold space. That means that at any single point in time, we can view millions of multi-modal signals and discover the non-linear relationships within an individual’s cultural, political, and scientific belief systems.
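
The alignment behind knot sequencing can be pictured as tying contemporaneous multi-modal signals to one temporal point before they are embedded. The tolerance-window grouping below is a hypothetical stand-in for the actual knot-sequencing algorithm.

from collections import defaultdict

def tie_to_temporal_points(signals, tolerance_ms=50):
    # Group (timestamp_ms, modality, value) records whose timestamps fall within
    # the same tolerance window into a single temporal point (a "knot").
    knots = defaultdict(list)
    for timestamp_ms, modality, value in signals:
        knots[timestamp_ms // tolerance_ms].append((modality, value))
    return dict(knots)

stream = [
    (1000, "audio", "That's so cool!"),
    (1010, "image", "excited face, hand pointing at rocket launch"),
    (1020, "emotion", "high positive excitement and arousal"),
    (1500, "physiology", "heart rate 92 bpm"),
]
for window, tied_signals in tie_to_temporal_points(stream).items():
    print(window, tied_signals)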

Not only can we map a single point in time, but we can also map a sequence of temporal points, represented as a collection of manifolds. This sequencing effect is similar to that of a video sequence created from picture frames taken every millisecond and assembled in temporal order. Where it differs is that our sequences succinctly summarize much of the qubit-level structure and context of a given moment (for example, audio (“That’s so cool!”), image (excited face, hand pointing at rocket launch), emotion (high positive excitement and arousal), and physiological data (heart rate)). RCML has the flexibility to apply hyperbolic manifold learning algorithms to these layers of increasingly high-dimensional combinations of points and sequences.
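
Continuing the video-frame analogy, a sequence can be sketched as per-moment summary vectors assembled in temporal order, with the movement between consecutive moments as one simple property of the resulting trajectory. The per-moment summarization itself is assumed here, since it is not defined above.

import numpy as np

def sequence_of_moments(moment_embeddings):
    # Order per-moment summary vectors by timestamp into one trajectory array.
    ordered = [moment_embeddings[t] for t in sorted(moment_embeddings)]
    return np.stack(ordered)              # shape: (n_moments, dim)

def trajectory_length(trajectory):
    # Total movement of the profile between consecutive moments.
    steps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    return float(steps.sum())

rng = np.random.default_rng(3)
moments = {t: rng.normal(size=8) for t in (1000, 1050, 1100, 1500)}
trajectory = sequence_of_moments(moments)
print(trajectory.shape, trajectory_length(trajectory))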

Decentralized Edge Processing Artificial Intelligence

Ipvive’s unsupervised graph sparsification uses decentralization to improve the efficiency of edge processing and to secure personalization. With billions of people constantly producing more and more data, decentralized processing with embedded AI in IoT devices (“edge processing”) helps avoid risky and expensive centralized data processing. Our vector run-time cycle operates on the edge, which means we can leave data on the device instead of moving it to our servers. This means faster results and more data privacy. In addition, we use unsupervised graph management at the individual and group levels to securely check identity, provide value in exchange for data, and improve the speed and efficiency of transactions using decentralized ledger technology.
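
The “leave data on the device” idea can be sketched as follows: raw signals are reduced locally to a compact summary, and only that summary, with an integrity hash, ever leaves the device. The summary fields and the use of SHA-256 are assumptions for illustration; Ipvive’s actual on-device protocol is not described here.

import hashlib
import json
import numpy as np

def process_on_device(raw_signals):
    # Reduce raw signals to a small summary; the raw data never leaves this function.
    summary = {
        "profile_mean": raw_signals.mean(axis=0).round(4).tolist(),
        "n_signals": int(raw_signals.shape[0]),
    }
    payload = json.dumps(summary, sort_keys=True).encode()
    summary["checksum"] = hashlib.sha256(payload).hexdigest()   # integrity check
    return summary                                              # only this is transmitted

print(process_on_device(np.random.rand(500, 8))["n_signals"])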

Interested in collaborating with us?