# Algebraic value edits work well in language models and are composable


Ṁ170 · resolved May 13

Resolved YES


Resolves according to follow-up post.

Slightly different formulation to replace the conditional element.
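To illustrate the claim in the title: if a "value edit" is modeled as adding a fixed steering vector to a model's hidden activations (a common framing in activation-steering work, and an assumption here, not the market's referenced method), then composability follows from the additivity of vector edits. A minimal sketch:

```python
# Toy sketch: model a "value edit" as adding a fixed vector to a
# hidden state. All names and values below are illustrative.
# Values are dyadic rationals so float addition is exact.

hidden = [0.5, -1.25, 0.375]    # stand-in for a model hidden state
edit_a = [0.125, 0.0, -0.5]     # hypothetical value-edit vector A
edit_b = [-0.25, 0.75, 0.125]   # hypothetical value-edit vector B

def apply_edit(state, edit):
    """Apply an additive edit to a hidden state, elementwise."""
    return [s + e for s, e in zip(state, edit)]

# Applying the two edits in sequence...
sequential = apply_edit(apply_edit(hidden, edit_a), edit_b)

# ...equals applying their vector sum in one step: the edits compose.
summed = [a + b for a, b in zip(edit_a, edit_b)]
combined = apply_edit(hidden, summed)

assert sequential == combined
```

Under this additive model, composing edits is just vector addition, which is what "composable" would mean operationally for such edits.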


# 🏅 Top traders

# | Name | Total profit
---|---|---
1 | | Ṁ14
2 | | Ṁ5
3 | | Ṁ4
4 | | Ṁ4
5 | | Ṁ0

## Related questions

Eliezer Yudkowsky is impressed by a machine learning model, and believes that the model may be very helpful for alignment research, by the end of 2026

29% chance

Will any language model trained without large number arithmetic be able to generalize to large number arithmetic by 2026?

54% chance

Will the first AGI be a large language model?

25% chance

What will be the most common word we use for processing text with large language models?

A new image generation system comes out which has accurate text generation, and interest in it surpasses Stable Diffusion and Midjourney

70% chance

Are LLMs easy to align because unsupervised learning imbues them with an ontology where human values are easy to express?

32% chance

By 2025, GPTs are proven to be able to infer scientific principles from linguistic data.

37% chance

Within 4 years, finetuning will become increasingly less relevant in favor of in context learning

73% chance
