It is the first pocket-sized AI supercomputing device, capable of running large language models (LLMs) of up to 120 billion parameters entirely on-device, with no need for cloud connectivity, servers ...