• vxx@lemmy.world
    3 months ago

    Don’t you have any security concerns with sending all your code and JIRA tickets to some company’s servers? My boss wouldn’t be pleased if I sent anything deemed a company secret over unencrypted channels.

    • panda_abyss@lemmy.ca
      3 months ago

      The tool doesn’t send your entire codebase, but it does send code.

      I had discussions with my CTO and security team before integrating Claude code.

      I have to use Gemini in one specific workflow, and Gemini had a lot of landmines in how they use your data. Anthropic was easier to understand.

      Anthropic also has some guidance for running Claude Code in a container with a firewall and your specified dev tools; it works, but that’s not my area of expertise.

      The container doesn’t solve every issue (your code still goes to remote servers), but it does let you restrict which files and network requests Claude can access (so, e.g., Claude can’t read your env vars or SSH key files).
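      Not Anthropic’s official setup, just a minimal sketch of the idea under some assumptions: the docker CLI is available, and “claude-code-sandbox” is a hypothetical image you’d build yourself with Claude Code plus your dev tools. The point is that only the project directory gets mounted and the container sits on a restricted network, so the agent can’t see SSH keys or the rest of your host environment.

      ```python
      # Minimal sketch, not Anthropic's reference devcontainer. Assumptions:
      # docker CLI installed; "claude-code-sandbox" is a hypothetical image you
      # build yourself; "claude-egress" is a hypothetical docker network whose
      # outbound traffic you limit with a firewall/allowlist.
      import subprocess

      PROJECT_DIR = "/path/to/repo"  # the only host path visible inside the container

      subprocess.run([
          "docker", "run", "--rm", "-it",
          "--network", "claude-egress",       # restricted network instead of the host's
          "--env-file", "claude.env",         # pass only the API key, not your whole host env
          "-v", f"{PROJECT_DIR}:/workspace",  # mount just the project, not your home dir
          "-w", "/workspace",
          "claude-code-sandbox",              # hypothetical image: Claude Code + dev tools
      ], check=True)
      ```

      Anthropic’s own guidance handles the network restriction with a firewall script inside the container; the sketch above just shows where that kind of restriction plugs in.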

      I do try local LLMs, but they’re not there yet on my machine for most use cases. Gemma 3n is decent if you need small-model performance and tool calls; phi4 works but isn’t a thinking model (the thinking variants are awful); and I’m exploring dream coder and diffusion models. R1 is still one of the best local models but frequently overthinks, even the new release. Context window size is the biggest limiting factor I find locally.