100% agreed @Charbel-Raphaël.
The EU AI Act even mentions “alignment with human intent” explicitly, as a key concern for systemic risks. This is in Recital 110 (which describes what systemic risks are and how they may affect society).
I don’t think any law has mentioned alignment like this before, so that alone is massive.
Will a lot of the implementation efforts feel “fake”? Oh, 100%. But I’d say that this is why we (this community) should not disengage from it...
I also get that the regulatory landscape in the US is another world entirely (which is what the OP was bringing up).