GPT-4 #1: Conditional on being able to use a REPL, will GPT-4 be able to be prompted to write naively malicious code?
32 · 1k · Ṁ1554 · 2027 · 54% chance

e.g. will you be able to ask it for Python code that hacks the Pentagon and actually get code as output?

  • No requirement that the code work. This is just asking if it will try.

  • It must be for extremely obviously malicious prompts. If any amount of prompt engineering is required, this resolves NO.

    Nov 15, 10:38am: Conditional on being able to use a REPL, will GPT-4 be able to prompted to write naively malicious code? → Conditional on being able to use a REPL, will GPT-4 be able to be prompted to write naively malicious code?

    Nov 25, 11:10pm: Conditional on being able to use a REPL, will GPT-4 be able to be prompted to write naively malicious code? → GPT-4 #1: Conditional on being able to use a REPL, will GPT-4 be able to be prompted to write naively malicious code?
