{"id":563,"date":"2026-01-13T08:16:31","date_gmt":"2026-01-13T08:16:31","guid":{"rendered":"https:\/\/itdd.au.dk\/lab\/?page_id=563"},"modified":"2026-04-06T09:31:52","modified_gmt":"2026-04-06T09:31:52","slug":"workshop-gen-ai","status":"publish","type":"page","link":"https:\/\/itdd.au.dk\/lab\/workshop-gen-ai\/","title":{"rendered":"Workshop 1. Semester &#8211; GenAI\/LLM"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">1. Semester &#8211; Introduktion til Generativ AI og LLM Prompt Workshops <\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">1. Definition af begrebet<\/h3>\n\n\n\n<p><strong>Hvad er et Prompt?<\/strong> Ordet &#8220;prompt&#8221; stammer fra latin <em>promptus<\/em> (parat\/rede) og bruges i it-sammenh\u00e6ng om at give stikord eller instrukser<sup><\/sup>. <strong>Prompt Engineering<\/strong> er processen med at designe og forfine disse instruktioner for at f\u00e5 de bedst mulige resultater fra generative AI-modeller (LLM&#8217;er)<sup><\/sup>.<\/p>\n\n\n\n<p>Selvom nogle eksperter, som Ethan Mollick, p\u00e5peger, at teknisk &#8220;prompt engineering&#8221; bliver mindre vigtigt i fremtiden, fordi modellerne bliver bedre til at g\u00e6tte vores intentioner, er det stadig en essentiel kompetence at kunne guide modellerne pr\u00e6cist.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2. Beskrivelse og Form\u00e5l<\/h3>\n\n\n\n<p><strong>Hvorfor arbejder vi med det?<\/strong> Form\u00e5let er at forst\u00e5 mekanikken bag chatbots, s\u00e5 vi ikke blot bruger dem blindt. 
A language model does not think; it calculates the probability of the next word based on enormous amounts of text (a corpus).<\/p>\n\n\n\n<p>We focus on the built-in limitations and risks:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Bias:<\/strong> The models reflect the data they were trained on, which can lead to skewed or politically coloured answers.<\/li>\n\n\n\n<li><strong>No truth:<\/strong> Language models have no morality and no ability to judge truth. They are trained to sound convincing, which means that the user alone is responsible for fact-checking.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">3. How do we work with it in the programme?<\/h3>\n\n\n\n<p>We work hands-on with &#8220;prompting&#8221; generative AI tools by testing research-based methods on concrete tasks, such as exam synopses or blog posts. In particular, we use three approaches:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Few Shot Learning:<\/strong> We give the model examples of correct solutions or of the desired layout before posing the task.<\/li>\n\n\n\n<li><strong>Adding Context:<\/strong> We assign the model a role (e.g. &#8220;you are an expert in teaching&#8221;) and define the target audience to sharpen the answer.<\/li>\n\n\n\n<li><strong>Chain of Thought:<\/strong> We ask the model to solve the task step by step (e.g. first outline, then write, then revise) instead of delivering the whole answer at once.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">4. 
Theoretical concepts<\/h3>\n\n\n\n<p>The workshop dives into the technical and sociotechnical concepts behind AI:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Black Box:<\/strong> We only see input and output; the internal complexity (parameters, system prompts) is often hidden from the user.<\/li>\n\n\n\n<li><strong>Hallucination:<\/strong> When the model invents facts in order to satisfy probability criteria.<\/li>\n\n\n\n<li><strong>Tokens:<\/strong> The way the model breaks text down into numerical values.<\/li>\n\n\n\n<li><strong>Bias &amp; Guardrails:<\/strong> How the companies behind the models try to steer (or overcorrect) the model&#8217;s answers through &#8220;system prompts&#8221;.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">5. Reading list<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Latour, B. (1999). <em>Pandora\u2019s hope: Essays on the reality of science studies<\/em>. Harvard University Press.<\/li>\n\n\n\n<li>Ferrando, J., Sarti, G., Bisazza, A., &amp; Costa-juss\u00e0, M. R. (2024). <em>A primer on the inner workings of transformer-based language models<\/em>. arXiv. <a href=\"https:\/\/arxiv.org\/abs\/2405.00208\">https:\/\/arxiv.org\/abs\/2405.00208<\/a><\/li>\n\n\n\n<li>Brown, T. B., et al. (2020). <em>Language models are few-shot learners<\/em>. arXiv. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2005.14165\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.48550\/arXiv.2005.14165<\/a><\/li>\n\n\n\n<li>Wei, J., et al. (2022). <em>Chain-of-thought prompting elicits reasoning in large language models<\/em>. arXiv. <a href=\"https:\/\/arxiv.org\/abs\/2201.11903\">https:\/\/arxiv.org\/abs\/2201.11903<\/a><\/li>\n\n\n\n<li>Mollick, E. (n.d.). <em>Captain\u2019s log: The irreducible weirdness \u2026<\/em> One Useful Thing. <a href=\"https:\/\/www.oneusefulthing.org\/p\/captains-log-the-irreducible-weirdness\">https:\/\/www.oneusefulthing.org\/p\/captains-log-the-irreducible-weirdness<\/a><\/li>\n\n\n\n<li>Mollick, E. (n.d.). <em>On working with wizards<\/em>. One Useful Thing. 
<a href=\"https:\/\/www.oneusefulthing.org\/p\/on-working-with-wizards\">https:\/\/www.oneusefulthing.org\/p\/on-working-with-wizards<\/a><\/li>\n\n\n\n<li><em>Prompt engineering is dead<\/em>. (n.d.). <em>IEEE Spectrum<\/em>. <a href=\"https:\/\/spectrum.ieee.org\/prompt-engineering-is-dead\">https:\/\/spectrum.ieee.org\/prompt-engineering-is-dead<\/a><\/li>\n\n\n\n<li>IDA. (n.d.). <em>Du kommer til at bruge AI \u2013 men glem hypen om prompt engineering<\/em>. <a href=\"https:\/\/ida.dk\/raad-og-karriere\/ai-kunstig-intelligens\/forsker-du-kommer-til-at-bruge-ai-men-glem-hypen-om-prompt-engineering\">https:\/\/ida.dk\/raad-og-karriere\/ai-kunstig-intelligens\/forsker-du-kommer-til-at-bruge-ai-men-glem-hypen-om-prompt-engineering<\/a><\/li>\n\n\n\n<li><em>The unreasonable effectiveness of eccentric automatic prompts<\/em>. (2024). arXiv. <a href=\"https:\/\/arxiv.org\/pdf\/2402.10949\">https:\/\/arxiv.org\/pdf\/2402.10949<\/a><\/li>\n\n\n\n<li><em>Large language models understand and can be enhanced by emotional stimuli<\/em>. (2023). arXiv. <a href=\"https:\/\/arxiv.org\/abs\/2307.11760\">https:\/\/arxiv.org\/abs\/2307.11760<\/a><\/li>\n\n\n\n<li>Stevens, I. W. (n.d.). <em>Motivating multimodal models: Balancing threats and rewards for enhanced performance<\/em>. Medium. <a href=\"https:\/\/medium.com\/@ingridwickstevens\/motivating-multimodal-models-balancing-threats-and-rewards-for-enhanced-performance-2126e419dac4\">https:\/\/medium.com\/@ingridwickstevens\/motivating-multimodal-models-balancing-threats-and-rewards-for-enhanced-performance-2126e419dac4<\/a><\/li>\n\n\n\n<li><em>Anthropic\u2019s Claude vulnerable to emotional \u2026<\/em> (2024, October 12). <em>The Register<\/em>. <a href=\"https:\/\/www.theregister.com\/2024\/10\/12\/anthropics_claude_vulnerable_to_emotional\/\">https:\/\/www.theregister.com\/2024\/10\/12\/anthropics_claude_vulnerable_to_emotional\/<\/a><\/li>\n\n\n\n<li><em>How Google tells you what you want to hear<\/em>. 
(2024, October 31). <em>BBC Future<\/em>. <a href=\"https:\/\/www.bbc.com\/future\/article\/20241031-how-google-tells-you-what-you-want-to-hear\">https:\/\/www.bbc.com\/future\/article\/20241031-how-google-tells-you-what-you-want-to-hear<\/a><\/li>\n\n\n\n<li>xAI. (n.d.). <em>grok4_system_turn_prompt_v8.j2<\/em> [Source code]. GitHub. <a href=\"https:\/\/github.com\/xai-org\/grok-prompts\/blob\/main\/grok4_system_turn_prompt_v8.j2\">https:\/\/github.com\/xai-org\/grok-prompts\/blob\/main\/grok4_system_turn_prompt_v8.j2<\/a><\/li>\n\n\n\n<li>Ordbog over det danske sprog. (n.d.). <em>prompt<\/em> [Dictionary entry]. <a href=\"https:\/\/ordnet.dk\/ddo\/ordbog?query=prompt\">https:\/\/ordnet.dk\/ddo\/ordbog?query=prompt<\/a><\/li>\n\n\n\n<li>CFU. (2025, March 5). <em>SkoleGPT sk\u00e6rmer nu bedre elever for problematiske emner<\/em>. <a href=\"https:\/\/cfu.dk\/2025\/03\/05\/skolegpt-skaermer-nu-bedre-elever-for-problematiske-emner\">https:\/\/cfu.dk\/2025\/03\/05\/skolegpt-skaermer-nu-bedre-elever-for-problematiske-emner<\/a><\/li>\n\n\n\n<li>Promptfoo. (n.d.). <em>Gemma-3-27B: Model report<\/em>. <a href=\"https:\/\/www.promptfoo.dev\/models\/reports\/gemma-3-27b\">https:\/\/www.promptfoo.dev\/models\/reports\/gemma-3-27b<\/a><\/li>\n\n\n\n<li>SkoleGPT. (n.d.). <em>SkoleGPT<\/em> [Website]. <a href=\"https:\/\/skolegpt.dk\">https:\/\/skolegpt.dk<\/a><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">6. Tools + links to guides<\/h3>\n\n\n\n<p>We discuss the difference between paid and free models, as well as safe use:<\/p>\n\n\n\n<p><strong>Safety:<\/strong> We test the limits of the models&#8217; safety filters (jailbreaking) to understand how &#8220;guardrails&#8221; work.<\/p>\n\n\n\n<p><strong><a href=\"https:\/\/skolegpt.dk\" data-type=\"link\" data-id=\"https:\/\/skolegpt.dk\">SkoleGPT<\/a>:<\/strong> A &#8220;sandbox&#8221; model developed for use in teaching; it is GDPR-compliant and does not store data. 
In theory, it can be used for experimenting safely together with children.<\/p>\n\n\n\n<p><strong>Paid vs. free models:<\/strong> Free models often have a smaller memory and are a generation behind, which affects the quality of the answers.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Guides:<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/itdd.au.dk\/lab\/tag\/ai\/\" data-type=\"post_tag\" data-id=\"10\">Generative language models<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>1st Semester &#8211; Introduction to Generative AI and LLM Prompt Workshops 1. Defining the concept What is a prompt? The word &#8220;prompt&#8221; derives from the Latin promptus (ready\/prepared) and is used in IT contexts for giving cues or instructions. Prompt engineering is the process of designing and refining these instructions to get the best possible results from [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-563","page","type-page","status-publish","hentry","entry"],"_links":{"self":[{"href":"https:\/\/itdd.au.dk\/lab\/wp-json\/wp\/v2\/pages\/563","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/itdd.au.dk\/lab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/itdd.au.dk\/lab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/itdd.au.dk\/lab\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/itdd.au.dk\/lab\/wp-json\/wp\/v2\/comments?post=563"}],"version-history":[{"count":9,"href":"https:\/\/itdd.au.dk\/lab\/wp-json\/wp\/v2\/pages\/563\/revisions"}],"predecessor-version":[{"id":762,"href":"https:\/\/itdd.au.dk\/lab\/wp-json\/wp\/v2\/pages\/563\/revisions\/762"}],"wp:attachment":[{"href":"https:\/\/itdd.au.dk\/lab\/wp-json\/wp\/v2\/media?parent=563"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/
{rel}","templated":true}]}}