Google has announced a major upgrade to LiteRT, its on-device inference engine that evolved from TensorFlow Lite. The new version introduces expanded hardware acceleration capabilities, making the runtime faster and more versatile across a wide range of devices. The move aims to improve performance for apps that run AI models locally.