
Had a little epiphany around using large language models locally: how the number of parameters relates to the amount of GPU VRAM needed. The relationship is indirect — VRAM scales roughly linearly with parameter count, but the multiplier depends on quantization and runtime overhead — so dial-twiddling is fussy. Upshot: larger-parameter models work reasonably well on a laptop with an NVIDIA GPU. No smoke.
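A minimal back-of-envelope sketch of that params-to-VRAM relationship. The function name, the 20% overhead figure, and the example sizes are my own assumptions for illustration, not anything from the post:

```python
# Rough VRAM estimate for running an LLM locally.
# Assumptions (illustrative, not from the post):
# - weights dominate memory: params * bytes_per_weight
# - ~20% extra for KV cache, activations, and runtime buffers

def estimate_vram_gb(params_billions: float, bits_per_weight: int,
                     overhead: float = 0.2) -> float:
    """Return an approximate VRAM requirement in GB."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# A hypothetical 7B model: 4-bit quantized vs. full fp16.
print(round(estimate_vram_gb(7, 4), 1))   # → 4.2
print(round(estimate_vram_gb(7, 16), 1))  # → 16.8
```

This is why quantization is the dial that matters: dropping from 16-bit to 4-bit weights cuts the footprint roughly 4x, which is what lets bigger models fit on laptop-class GPUs.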

