Innovating for the Future of Marketing

Sign up for the latest in R&D from our Applied AI research lab

"*" indicates required fields

This field is for validation purposes and should be left unchanged.
Brand AI Generative Capabilities

Enhanced Contextual Personalization of Brand Creatives through the Integration of Brand Attributes in Generative AI Models


– Ishaan Bhola, Mukunda NS, P Gaglani, V Singhal, T Kaur, R Krishna, H Ali Khan, H Sehrawat, A Jain, A Nainwal

A curated list of research papers that we are reading.


Fine-tuned Language Models can be Continual Learners

Recent work on large language models relies on the intuition that most natural language processing tasks can be described via natural language instructions. Language models trained on these instructions show strong zero-shot performance on several standard datasets…

View Paper
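
As a small illustration of "tasks described via natural language instructions", the sketch below recasts a sentiment-classification example as an instruction prompt that an instruction-tuned model could answer zero-shot. The wording, function name, and example text are ours, not the paper's.

def to_instruction_prompt(text):
    # Recast a classification example as a natural-language instruction,
    # so an instruction-tuned model can answer it zero-shot.
    return (
        "Classify the sentiment of the following review as positive or negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

print(to_instruction_prompt("The ad campaign felt fresh and memorable."))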

Attention Is All You Need

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism…

View Paper
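
For context, the operation the title refers to is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. The NumPy sketch below implements just that one formula on toy data (the shapes and values are ours); the full Transformer stacks many such attention heads inside an encoder-decoder.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # numerically stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 4 tokens with 8-dimensional queries, keys, and values.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)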

Large Language Models Can Self-Improve

Large Language Models (LLMs) have achieved excellent performance on various tasks. However, fine-tuning an LLM requires extensive supervision. Humans, on the other hand, can improve their reasoning abilities through self-thinking without external inputs…

View Paper
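
At a high level, the paper's recipe is: sample several chain-of-thought completions per unlabeled question, keep the majority-vote answer as a pseudo-label, and fine-tune on the rationales that reached it. The Python sketch below is our own paraphrase of that loop; sample_cot_answers and fine_tune are placeholders you would wire to a real model and training stack, and the 0.5 confidence threshold is illustrative, not the paper's.

from collections import Counter

def self_improve(model, questions, sample_cot_answers, fine_tune,
                 n_samples=8, threshold=0.5):
    # One round of self-improvement: pseudo-label unlabeled questions by
    # majority vote over sampled chain-of-thought answers, then fine-tune
    # on the rationales that led to the majority answer.
    training_examples = []
    for question in questions:
        completions = sample_cot_answers(model, question, n=n_samples)  # list of (rationale, answer)
        answers = [answer for _, answer in completions]
        majority, count = Counter(answers).most_common(1)[0]
        if count / n_samples >= threshold:  # keep only high-confidence pseudo-labels
            training_examples += [(question, rationale)
                                  for rationale, answer in completions
                                  if answer == majority]
    return fine_tune(model, training_examples)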

Structure and Content-Guided Video Synthesis with Diffusion Models

Text-guided generative diffusion models unlock powerful image creation and editing tools. While these have been extended to video generation, current approaches that edit the content of existing footage while retaining structure require expensive re-training for every input or rely on error-prone propagation of image edits across frames…

View Paper

Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations

Generative language models have improved drastically, and can now produce realistic text outputs that are difficult to distinguish from human-written content. For malicious actors, these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations. This report assesses how language models might change influence operations in the future, and what steps can be taken to mitigate this threat…

View Paper