Privacy at Highlight

We built Highlight to be an intermediary layer between models and your data.

We found the prospect of models talking directly to your OS and extracting training data genuinely scary.

We believe a company should exist whose incentive is to protect your privacy.

Verifiably secure

Our goal is to make our system verifiably secure, and to let you customize it to fit your own privacy needs.

  • You can use the Electron Debugger to verify every network request going in and out of the app (see the sketch after this list). If something doesn’t look right, you can always contact us in our Discord.
  • If you do not want our servers generating suggestions based on your last frame before invoking the recorder, go to Settings -> Dev Mode and change the inference location.
  • If you do not want to use our servers for anything, you can clone Highlight Chat and Conversations (both available on our GitHub) and connect them to your own backend.
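If you want to verify our traffic yourself, here is a minimal sketch of inspecting network requests through Electron’s debugger API (the Chrome DevTools Protocol). It assumes code running in the app’s main process with access to a BrowserWindow, e.g. in a development build; the `win` variable and where you hook this in are hypothetical.

```typescript
// Minimal sketch: attach the Chrome DevTools Protocol to a window's
// webContents and log every outgoing network request.
import { BrowserWindow } from 'electron';

function logNetworkRequests(win: BrowserWindow): void {
  try {
    win.webContents.debugger.attach('1.3'); // CDP protocol version
  } catch (err) {
    console.error('Debugger attach failed:', err);
    return;
  }

  win.webContents.debugger.on('message', (_event, method, params) => {
    // Fires for every request the renderer is about to send.
    if (method === 'Network.requestWillBeSent') {
      console.log(`${params.request.method} ${params.request.url}`);
    }
  });

  win.webContents.debugger.sendCommand('Network.enable');
}
```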

We never store screen recordings

No recording ever gets uploaded. In fact, it is never even stored: we discard every frame after it’s recorded. Audio transcripts can optionally be saved using the Conversations app, but they are auto-deleted (at an interval you control) and stay completely local.
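To illustrate what interval-based local deletion looks like, here is a minimal sketch in TypeScript. The directory layout, file naming, and retention setting are hypothetical, not Highlight’s actual storage code.

```typescript
// Minimal sketch of local-only transcript retention: delete transcript
// files older than a user-chosen interval. Nothing here touches the network.
import * as fs from 'fs/promises';
import * as path from 'path';

async function pruneTranscripts(dir: string, maxAgeMs: number): Promise<void> {
  const now = Date.now();
  for (const name of await fs.readdir(dir)) {
    const file = path.join(dir, name);
    const { mtimeMs } = await fs.stat(file);
    if (now - mtimeMs > maxAgeMs) {
      await fs.unlink(file); // transcript past its retention window: remove it
    }
  }
}

// e.g. run hourly with a 7-day retention window (both values hypothetical):
// setInterval(() => pruneTranscripts(transcriptDir, 7 * 24 * 3600 * 1000), 3600 * 1000);
```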

There are two scenarios in which data gets sent to our servers

  • When you attach it or type it while chatting with Highlight Chat.
  • When you have “Allow Cloud Transcripts” enabled, which keeps Highlight working if your local audio model fails.

Encryption

Every conversation and attachment in Highlight Chat is encrypted, and even our engineers have neither the ability nor the authority to access conversations. We have an internal escalation process: accessing a single conversation requires sign-off from the CEO and CTO, who are the only ones with the decryption keys.
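For the curious, here is a minimal sketch of the general technique: authenticated encryption with AES-256-GCM via Node’s built-in crypto module. It illustrates the idea, not our actual implementation; the function names and key handling are hypothetical.

```typescript
// Minimal sketch of per-conversation authenticated encryption (AES-256-GCM).
// The 32-byte key would live in a key-management system that only the
// designated key holders can access.
import { randomBytes, createCipheriv, createDecipheriv } from 'crypto';

function encryptConversation(plaintext: string, key: Buffer) {
  const iv = randomBytes(12); // unique nonce per message
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptConversation(
  { iv, ciphertext, tag }: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer,
): string {
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag); // authenticated decryption: tampering throws
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
```

The GCM authentication tag means a ciphertext that has been tampered with fails to decrypt at all, rather than silently producing garbage.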

The Future

Our goal is to continue to be the most secure way to interact with LLMs. If you’d like to help us achieve that, join our Discord.