Eloquent Algorithms has rebranded to Cura Learning; a new website and domain will follow.
Case Studies
Scene Understanding with Deep Computer Vision Models & Early Childhood Brain Development
Using CLIP (Contrastive Language-Image Pre-training), a dual-encoder architecture; Attention Flow, a convolutional neural network model for third-person joint-attention scene understanding; DPT (Dense Prediction Transformer), a monocular depth estimation model; and custom 3D geometry, our team was able to quantitatively measure instances of healthy brain development.
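As a rough illustration, the sketch below composes two of these off-the-shelf models, CLIP for scoring candidate scene descriptions and DPT for monocular depth, via Hugging Face checkpoints. The checkpoint names, prompts, and input frame are illustrative assumptions, not our production pipeline.

```python
# Minimal sketch: CLIP scene scoring + DPT depth estimation on one frame.
# Checkpoints, prompts, and file names are illustrative placeholders.
import torch
from PIL import Image
from transformers import (CLIPModel, CLIPProcessor,
                          DPTForDepthEstimation, DPTImageProcessor)

image = Image.open("scene_frame.jpg")  # hypothetical video frame

# 1. CLIP: score the frame against candidate scene descriptions.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
texts = ["a caregiver and child looking at the same toy",
         "a child playing alone"]
inputs = clip_proc(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = clip(**inputs).logits_per_image.softmax(dim=-1)

# 2. DPT: monocular depth map, consumed downstream by the 3D geometry step.
dpt = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")
dpt_proc = DPTImageProcessor.from_pretrained("Intel/dpt-large")
with torch.no_grad():
    depth = dpt(**dpt_proc(images=image, return_tensors="pt")).predicted_depth

print(dict(zip(texts, probs[0].tolist())), depth.shape)
```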
We had the privilege of presenting this work at the Institute of Human Development at UC Berkeley.
Developed Novel A/B Testing Framework to Incorporate Text Data Using Text Embeddings
We developed Variable Ratio Matching with Embeddings (VRM-E), an A/B testing framework with three hyperparameters that adjusts for text-based confounders in matching estimators. Our method is preferred in 70% of cases.
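The sketch below shows the general shape of the matching step, assuming precomputed text embeddings; the names `caliper`, `min_matches`, and `max_matches` are illustrative stand-ins for the framework's three hyperparameters, not the exact formulation in the paper.

```python
# Schematic variable-ratio matching on text embeddings (not the exact VRM-E
# algorithm): each treated unit keeps a variable number of nearby controls.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def matched_att(treated_emb, control_emb, treated_outcomes, control_outcomes,
                caliper=0.3, min_matches=1, max_matches=5):
    """Estimate the average treatment effect on the treated (ATT).
    All arguments are numpy arrays; embeddings are row vectors."""
    nn = NearestNeighbors(n_neighbors=max_matches, metric="cosine")
    nn.fit(control_emb)
    dists, idx = nn.kneighbors(treated_emb)
    effects = []
    for d, i, y_t in zip(dists, idx, treated_outcomes):
        keep = i[d <= caliper]       # variable ratio: matches inside the caliper
        if len(keep) < min_matches:  # drop treated units with too few matches
            continue
        effects.append(y_t - control_outcomes[keep].mean())
    return float(np.mean(effects))
```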
We are privileged to announce that this work has been published in the 2023 Findings of the Association for Computational Linguistics (the #1-ranked natural language processing conference).
Sentence Classification with Large Language Models (LLMs)
Using BERT (Bidirectional Encoder Representations from Transformers) with feature engineering via NLTK (Natural Language Toolkit), we trained a custom LLM that achieved 94% accuracy on a question-labeling task, a 19% accuracy improvement over classical machine learning models.
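A minimal sketch of the classification setup using Hugging Face transformers follows; the checkpoint, label set, and example question are illustrative assumptions, and the NLTK feature engineering step is omitted.

```python
# Minimal BERT question-labeling sketch; labels and checkpoint are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

labels = ["factual", "open-ended", "procedural"]  # hypothetical label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))  # fine-tune before real use

def label_question(text: str) -> str:
    """Return the highest-scoring label for one question."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return labels[int(logits.argmax(dim=-1))]

print(label_question("Why do leaves change color in the fall?"))
```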
Vehicle Routing Problem with Combinatorial Optimization & Integer Programming
Scaled the capacity of a client's optimization algorithm to handle 15x the order volume for optimal delivery by developing a custom optimization model from scratch.
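The client's model is proprietary, but the toy instance below shows the general shape of such a routing solver using Google OR-Tools; the distance matrix, vehicle count, and depot index are made up for illustration.

```python
# Toy vehicle-routing instance with Google OR-Tools (not the client's model).
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

dist = [  # symmetric distance matrix: depot (node 0) + 4 delivery stops
    [0, 9, 7, 8, 5],
    [9, 0, 4, 6, 7],
    [7, 4, 0, 3, 6],
    [8, 6, 3, 0, 4],
    [5, 7, 6, 4, 0],
]
num_vehicles, depot = 2, 0
manager = pywrapcp.RoutingIndexManager(len(dist), num_vehicles, depot)
routing = pywrapcp.RoutingModel(manager)

def cost(i, j):  # arc cost between two internal routing indices
    return dist[manager.IndexToNode(i)][manager.IndexToNode(j)]

routing.SetArcCostEvaluatorOfAllVehicles(routing.RegisterTransitCallback(cost))
params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = (
    routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
solution = routing.SolveWithParameters(params)

for v in range(num_vehicles):  # print each vehicle's route back to the depot
    index, route = routing.Start(v), []
    while not routing.IsEnd(index):
        route.append(manager.IndexToNode(index))
        index = solution.Value(routing.NextVar(index))
    print(f"vehicle {v}: {route + [depot]}")
```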
Developed Custom Object Tracking with Sortation Algorithm
Developed a custom object tracking model using YOLOv4 (You Only Look Once) and Deep SORT (Simple Online and Realtime Tracking with a Deep Association Metric) to track whether blocks are placed in the correct order by size.
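The sketch below shows only the sortation check, assuming an upstream YOLOv4 + Deep SORT tracker that already yields per-block bounding boxes; the tuple format is a hypothetical simplification of such a tracker's output.

```python
# Sortation check over tracked bounding boxes; the tracker itself is omitted.
def blocks_sorted_correctly(tracks):
    """tracks: list of (track_id, x, y, w, h) for blocks in final positions.
    Returns True if left-to-right placement matches descending size."""
    by_position = sorted(tracks, key=lambda t: t[1])      # order left to right
    sizes = [w * h for _, _, _, w, h in by_position]      # box area as size proxy
    return all(a >= b for a, b in zip(sizes, sizes[1:]))  # non-increasing sizes

example = [(3, 10, 50, 40, 40), (1, 80, 50, 30, 30), (2, 140, 50, 20, 20)]
print(blocks_sorted_correctly(example))  # True: sizes decrease left to right
```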
Research
Our experts produce cutting-edge research at the intersection of learning science and machine learning. We have experience with the following methods: large language models (LLMs), causal inference/design of experiments, computer vision, natural language understanding (NLU), and statistical learning.
Publications and Talks
Zhang et al. (2023), “Causal Confounding Adjustment via Embedding Matching: A Case Study in Estimating the Causal Effects of Peer Review Policies”, Findings of the Association for Computational Linguistics
Zhao, P. and Zhang, R.Z. (2023), “Joint Attention, Computer Vision, Foundational Models: How state-of-the-art AI approaches can analyze instances of early childhood development”, IHD Flash Talks
Zhu, X., Chen, B., Avadhanam, R.M., Shui, H. and Zhang, R.Z. (2020), “Reading and connecting: using social annotation in online classes”, Information and Learning Sciences
Where Our Experts Have Worked
Stanford Center on Early Childhood
Stanford Autonomous Agents Lab
Stanford Mimir Knowledge Creation Lab
University of Pennsylvania Graduate School of Education Wonder Lab
Domain Expertise
Machine Learning (Natural Language Processing, Computer Vision)
Early Childhood Development
Computational Sociology
Program Evaluation
Learning Science
Design of Experiments
OUR STORY
Quality, not quantity
We have made quality our habit. It’s not something we just strive for; we live by this principle every day.