Normative theory:
not covered in law, limited in JS. We therefore don't have a good theory of how external AI impacts media freedom. I will identify how it does so (control, homogenisation, power).
I will, for the first time, evaluate how existing EU law can actually address this threat and therefore how states can comply with their positive obligation.
theory: built on the assumption that in newsrooms editorial decision-making is controlled by editors. Will show how that assumption is false and what implications this has for our understanding of media freedom.
So far, two strands: looked at internal control over tools media built themselves and at the responsibilities of platforms. Now I will combine those to create a new normative framework.
will also expand into areas that neither I nor anyone else has considered in media law: the Data Act/cloud, non-financial resources (Media Action Plan and data space).
short term: how can media use the law to gain control over external AI?
long term: make sure regulators live up to their positive obligation; revision of the EMFA.
?
adapt framework to account for infrastructural capture.
show concretely what measures states can take to comply with their positive obligation.
show why it's problematic from an FR perspective: homogenisation/diversity, opinion power.
narrow down what infrastructure matters.
media freedom has a stronger FR basis and brings in the positive obligation.
Yes, it can have value for media.
not the focus. I will show in WP2c what a more positive AI could look like; will mostly look at the conditions that need to be in place for that positive potential to be realised.
no. genAI is a good use case because its complexity and utility make it attractive to use but hard to control, but the same factors are present in other AI.
won't > will look at overarching conditions for any AI tool to be used in line with media freedom.
Oster, functional approach > centres on contribution to public discourse.
have to create an enabling environment; states are the ultimate guarantors of pluralism. To make this concrete, see the EMFA. Yet it needs to be much broader.
capture is built on the assumption that an external party intentionally tries to assume power over media so it can use them for its own purposes.
that's not always the problem > opinion power.
that's not the only problem > homogenisation.
small and large, because one theme in the literature is that smaller media orgs face significant challenges.
people making strategic decisions on whether to use AI. Who's involved depends on what part of the stack we're talking about; mostly editors in combination with technologists.
narrow: under what conditions, concretely, does this constrain editorial decision-making? To lay the basis for the legal analysis to address those conditions.
speed of development. Built in a moment at the end of WP1 to reassess; have strong connections with media orgs and leading scholars.
dataset: depends on the collaboration of the media partner. If impossible, work with a media partner already publishing a dataset to either enrich it or evaluate the constraints on publishing these more responsible resources.
(1) prevent more of the editorial process from being shaped according to big tech logics.
(2) need to fundamentally rethink what the positive obligation to safeguard media freedom looks like in the context of AI.
need an independent source of information on how states should safeguard media freedom.