All activity
    Shivendra Soni
As AI applications gain traction, the costs and latency of using large language models (LLMs) can escalate. VectorCache addresses these issues by caching LLM responses based on semantic similarity, so repeated or near-duplicate queries are answered from the cache instead of the model, reducing both costs and response times (a rough sketch of the idea follows below).
Vector Cache
    A Python Library for Efficient LLM Query Caching
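As a rough illustration of the technique the post describes, the sketch below keeps an in-memory list of query embeddings and returns a cached response whenever a new query is sufficiently similar to an earlier one. The `SemanticCache` class, the `call_llm` stub, the embedding model, and the 0.9 threshold are all illustrative assumptions and are not the actual vector_cache / VectorCache API.

```python
# Minimal sketch of semantic LLM-response caching.
# Names here are hypothetical, not the vector_cache library's API.
import numpy as np
from sentence_transformers import SentenceTransformer


def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM request (e.g. an OpenAI or local-model call).
    return f"(model answer to: {prompt})"


class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        # Any sentence-embedding model works; MiniLM is small and fast.
        self.encoder = SentenceTransformer("all-MiniLM-L6-v2")
        self.threshold = threshold   # minimum cosine similarity for a cache hit
        self.embeddings = []         # cached query embeddings (unit vectors)
        self.responses = []          # cached LLM responses

    def get(self, query: str):
        """Return a cached response if a semantically similar query exists."""
        if not self.embeddings:
            return None
        q = self.encoder.encode(query, normalize_embeddings=True)
        sims = np.array(self.embeddings) @ q   # cosine similarity via dot product
        best = int(sims.argmax())
        return self.responses[best] if sims[best] >= self.threshold else None

    def put(self, query: str, response: str):
        """Store the query embedding alongside the model's response."""
        self.embeddings.append(self.encoder.encode(query, normalize_embeddings=True))
        self.responses.append(response)


cache = SemanticCache()
question = "What is vector caching?"
answer = cache.get(question)
if answer is None:
    answer = call_llm(question)
    cache.put(question, answer)
```

In practice the embeddings would live in a proper vector index (FAISS or a vector database) rather than a Python list, and the similarity threshold trades cache hit rate against the risk of returning a mismatched answer.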
Shivendra Soni
    left a comment
Does it only work for US stocks? Any idea how I can adapt this for the Indian stock market?
Composer
Build, backtest, and execute trading algorithms with AI
Shivendra Soni
    left a comment
Has this been disabled now?
    Company in a Box
    Startup idea to leads in one click with GPT-3
Shivendra Soni
    left a comment
    This is so soothing :D
Drive & Listen
    Drive around cities while listening to their local radios