BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//DATE2024//date-conference.com//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTART;TZID=Europe/Madrid:20240326T163000
DTEND;TZID=Europe/Madrid:20240326T180000
LOCATION:Break-Out Room S1+2
CREATED:20240322T074734Z
DTSTAMP:20240322T074734Z
SUMMARY:ET01 Embedded Tutorial: Using Generative AI for Next-generation EDA
URL;VALUE=URI:https://date24.date-conference.com/programme#ET01
BEGIN:VALARM
TRIGGER:-PT15M
ACTION:DISPLAY
DESCRIPTION:Reminder
END:VALARM
DESCRIPTION:Get the latest session information at 
	https://date24.date-conference.com/programme#ET01\n\n\nMotivation:	There 
	are increasing demands for integrated circuits but a shortage of designers 
	- Cadence’s blog reports a shortage of 67,000 employees in the US alone. 
	These increasing pressures alongside shifts to smaller nodes and more 
	complexity lead to buggy designs and slow time-to-market. State-of-the-art 
	Generative AI tools like GPT-4 and Bard have shown promising capabilities 
	in automatic generation of register transfer level (RTL) code, assertions, 
	and testbenches, and in bug/Trojan detection. Such models can be further 
	specialized for hardware tasks by fine-tuning on open-source datasets. As 
	Generative AI solutions find increasing adoption in the EDA flow, there is 
	a need for training EDA experts on using, training, and fine-tuning such 
	models in the hardware context.\n	Intended audience:	Students, academics, 
	and practitioners in EDA/VLSI/FPGA and Security\n	Objectives	In this 
	tutorial we will show the audience how one can use current capabilities in 
	generative AI (e.g. ChatGPT) to accelerate hardware design tasks. We will 
	explore how it can be used with both closed and open-source tooling, and 
	how you can also train your own language models and produce designs in a 
	fully open-source manner. We'll discuss how commercial operators are 
	beginning to make moves in this space (GitHub Copilot, Cadence JedAI) and 
	reflect on the consequences of this in education and industry (will our 
	designs become buggier? Will our graduating VLSI students know less?). 
	We'll cover all of this using a representative suite of examples, from 
	simple (basic shift registers) to complex (AXI bus components and 
	microprocessor designs).\n	Abstract	There are ever-increasing demands on 
	complexity and production timelines for integrated circuits. This puts 
	pressure on chip designers and design processes, and ultimately results in 
	buggy designs with potentially exploitable mistakes. When computer chips 
	underpin every part of modern life, enabling everything from your cell 
	phone to your car, traffic lights to pacemakers, coffee machines to 
	wireless headphones, then mistakes have significant consequences. This 
	unfortunate combination of demand and increasing difficulty has resulted 
	in shortages of qualified engineers, with some reports indicating that 
	there are 67,000 jobs in the field yet unfilled.\n	\n	Fortunately, there 
	is a path forward. For decades, the Electronic Design Automation (EDA) 
	field has applied the ever-increasing capabilities from the domains of 
	machine learning and artificial intelligence to steps throughout the chip 
	design flow. Steps from layouts, power and performance analysis and 
	estimation, and physical design are all improved by programs taught rather 
	than programmed.\n	\n	In this tutorial we will explore what's coming next: 
	EDA applications from the newest type of artificial intelligence, 
	generative pre-trained transformers (GPTs), also known as Large Language 
	Models. We will show how models like the popular ChatGPT can be applied to 
	tasks such as writing HDL, searching for and repairing bugs, and even 
	producing assertions for complex debugging tasks. Rather than constrain 
	oneself just to commercial and 
	closed-source tooling, we'll also show how you can train your own language 
	models and produce designs in a fully open-source manner. We'll discuss 
	how commercial operators are beginning to make moves in this space (GitHub 
	Copilot, Cadence JedAI) and reflect on the consequences of this in 
	education and industry (will our designs become buggier? Will our 
	graduating VLSI students know less?). We'll cover all of this using a 
	representative suite of examples, from simple (basic shift registers) to 
	complex (AXI bus components and microprocessor designs).\n	\n	Necessary 
	background	Experience with EDA flows and software such as Xilinx Vivado, 
	Yosys, iverilog, etc. will be helpful but is not required as training on 
	the day will be provided.\n	References:\n	(Tutorial presenters in 
	bold)\n	\n	S. Thakur, B. Ahmad, Z. Fan, H. Pearce, B. Tan, R. Karri, B. 
	Dolan-Gavitt, S. Garg, "Benchmarking Large Language Models for Automated 
	Verilog RTL Code Generation," 2023 Design, Automation & Test in Europe 
	Conference & Exhibition (DATE), Antwerp, Belgium, 2023, pp. 1-6, doi: 
	10.23919/DATE56975.2023.10137086.\n	\n	J. Blocklove, S. Garg, R. Karri, 
	H. Pearce, “Chip-Chat: Challenges and Opportunities in Conversational 
	Hardware Design,” 2023 Machine Learning in CAD Workshop (MLCAD). 
	Preprint: https://arxiv.org/abs/2305.13243\n	\n	H. Pearce, B. Tan, B. 
	Ahmad, R. Karri and B. Dolan-Gavitt, "Examining Zero-Shot Vulnerability 
	Repair with Large Language Models," 2023 IEEE Symposium on Security and 
	Privacy (SP), San Francisco, CA, USA, 2023, pp. 2339-2356, doi: 
	10.1109/SP46215.2023.10179324.\n	\n	B. Ahmad, S. Thakur, B. Tan, R. Karri, 
	H. Pearce, “Fixing Hardware Security Bugs with Large Language Models,” 
	under review. Preprint: https://arxiv.org/abs/2302.01215\n	\n	R. Kande, H. 
	Pearce, B. Tan, B. Dolan-Gavitt, S. Thakur, R. Karri, J. Rajendran, 
	“LLM-assisted Generation of Hardware Assertions,” under review. 
	Preprint: https://arxiv.org/abs/2306.14027\n	\n	On the day:\n	\n	Hands on 
	session\n	\n	Content: Audience members will use the language models to 
	achieve various tasks within a simple EDA environment focused on 
	simulation.\n	\n	Goals: While we will also demo approaches using more 
	complex software, the hands-on session will focus on the use of iverilog, 
	a simple, free, open-source simulator for Verilog designs. iverilog is 
	not demanding (it can be run on local machines/laptops) and runs on 
	Windows, Linux, and 
	macOS.\n	\n	Pre-requisites: While it is preferable for participants to have 
	installed gcc, build-essential, iverilog, and gtkwave in advance, doing so 
	on the day is not difficult and we can provide guidance at the beginning 
	of the session.\n	\n	Tutorial material	Reference material on the 
	pre-requisites and the manuscripts from the listed references.\n	Tutorial 
	plan\n	\n	0-15 mins: Introduction and motivation by Hammond Pearce, Ramesh 
	Karri, Siddharth Garg, and Jason Blocklove (presenter TBD)\n	\n	15-35 
	mins: Hands-on Chip-Chat - using ChatGPT for writing, simulating, and 
	bug-fixing Verilog by Jason Blocklove and Hammond Pearce (participants 
	will be provided with scripts that they can adapt to interact with ChatGPT 
	for their own tools)\n	\n	35-60 mins: Hands-on VeriGen: Developing 
	Open-source EDA datasets and models by Shailja Thakur and Jason 
	Blocklove\n	\n	60-80 mins: AI for Bug Detection: Accelerating hardware 
	fuzzing and flagging bugs and Trojans with Generative AI by Benjamin Tan 
	and JV Rajendran\n	\n	80-90 mins: Gazing into the Crystal Ball: The future 
	of EDA with Generative AI by Siddharth Garg and Ramesh Karri
X-ALT-DESC;FMTTYPE=text/html:<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN"><HTML><HEAD><META 
	NAME="Generator" CONTENT="MS Exchange Server version 
	16.0.17231.20290"><TITLE></TITLE></HEAD><BODY><p>Get the latest session 
	information at <a 
	href="https://date24.date-conference.com/programme#ET01">https://date24.date-conference.com/programme#ET01</a></p><table>	<colgroup>		<col 
	span="1" style="width: 25%;" />		<col span="1" style="width: 75%;" 
	/>	</colgroup>	<tbody>		<tr>			<th>Motivation:</th>			<td>There are 
	increasing demands for integrated circuits but a shortage of designers - 
	Cadence’s blog reports a shortage of 67,000 employees in the US alone. 
	These increasing pressures alongside shifts to smaller nodes and more 
	complexity lead to buggy designs and slow time-to-market. State-of-the-art 
	Generative AI tools like GPT-4 and Bard have shown promising capabilities 
	in automatic generation of register transfer level (RTL) code, assertions, 
	and testbenches, and in bug/Trojan detection. Such models can be further 
	specialized for hardware tasks by fine-tuning on open-source datasets. As 
	Generative AI solutions find increasing adoption in the EDA flow, there is 
	a need for training EDA experts on using, training, and fine-tuning such 
	models in the hardware context.</td>		</tr>		<tr>			<th>Intended 
	audience:</th>			<td>Students, academics, and practitioners in 
	EDA/VLSI/FPGA and 
	Security</td>		</tr>		<tr>			<th>Objectives</th>			<td>In this tutorial we 
	will show the audience how one can use current capabilities in generative 
	AI (e.g. ChatGPT) to accelerate hardware design tasks. We will explore how 
	it can be used with both closed and open-source tooling, and how you can 
	also train your own language models and produce designs in a fully 
	open-source manner. We'll discuss how commercial operators are beginning 
	to make moves in this space (GitHub Copilot, Cadence JedAI) and reflect on 
	the consequences of this in education and industry (will our designs 
	become buggier? Will our graduating VLSI students know less?). We'll cover 
	all of this using a representative suite of examples, from simple (basic 
	shift registers) to complex (AXI bus components and microprocessor 
	designs).</td>		</tr>		<tr>			<th>Abstract</th>			<td>There are 
	ever-increasing demands on complexity and production timelines for 
	integrated circuits. This puts pressure on chip designers and design 
	processes, and ultimately results in buggy designs with potentially 
	exploitable mistakes. When computer chips underpin every part of modern 
	life, enabling everything from your cell phone to your car, traffic lights 
	to pacemakers, coffee machines to wireless headphones, then mistakes have 
	significant consequences. This unfortunate combination of demand and 
	increasing difficulty has resulted in shortages of qualified engineers, 
	with some reports indicating that there are 67,000 jobs in the field yet 
	unfilled.			<p>&nbsp;</p>			<p>Fortunately, there is a path forward. For 
	decades, the Electronic Design Automation (EDA) field has applied the 
	ever-increasing capabilities from the domains of machine learning and 
	artificial intelligence to steps throughout the chip design flow. Steps 
	from layouts, power and performance analysis and estimation, and physical 
	design are all improved by programs taught rather than 
	programmed.</p>			<p>In this tutorial we will explore what's coming next: 
	EDA applications from the newest type of artificial intelligence, 
	generative pre-trained transformers (GPTs), also known as Large Language 
	Models. We will show how models like the popular ChatGPT can be applied to 
	tasks such as writing HDL, searching for and repairing bugs, and even 
	producing assertions for complex debugging tasks. Rather than constrain 
	oneself just to commercial and 
	closed-source tooling, we'll also show how you can train your own language 
	models and produce designs in a fully open-source manner. We'll discuss 
	how commercial operators are beginning to make moves in this space (GitHub 
	Copilot, Cadence JedAI) and reflect on the consequences of this in 
	education and industry (will our designs become buggier? Will our 
	graduating VLSI students know less?). We'll cover all of this using a 
	representative suite of examples, from simple (basic shift registers) to 
	complex (AXI bus components and microprocessor 
	designs).</p>			</td>		</tr>		<tr>			<th>Necessary 
	background</th>			<td>Experience with EDA flows and software such as 
	Xilinx Vivado, Yosys, iverilog, etc. will be helpful but is not required 
	as training on the day will be 
	provided.</td>		</tr>		<tr>			<th>References:<br />			(Tutorial presenters 
	in bold)</th>			<td>			<p><strong>S. Thakur</strong>, B. Ahmad, Z. Fan, 
	<strong>H. Pearce, B. Tan, R. Karri</strong>, B. Dolan-Gavitt, <strong>S. 
	Garg</strong>, "Benchmarking Large Language Models for Automated Verilog 
	RTL Code Generation," 2023 Design, Automation &amp; Test in Europe 
	Conference &amp; Exhibition (DATE), Antwerp, Belgium, 2023, pp. 1-6, doi: 
	10.23919/DATE56975.2023.10137086.</p>			<p><strong>J. Blocklove, S. Garg, 
	R. Karri, H. Pearce</strong>, “Chip-Chat: Challenges and Opportunities 
	in Conversational Hardware Design,” 2023 Machine Learning in CAD 
	Workshop (MLCAD). Preprint: <a 
	href="https://arxiv.org/abs/2305.13243">https://arxiv.org/abs/2305.13243</a></p>			<p><strong>H. 
	Pearce, B. Tan</strong>, B. Ahmad, <strong>R. Karri</strong> and B. 
	Dolan-Gavitt, "Examining Zero-Shot Vulnerability Repair with Large 
	Language Models," 2023 IEEE Symposium on Security and Privacy (SP), San 
	Francisco, CA, USA, 2023, pp. 2339-2356, doi: 
	10.1109/SP46215.2023.10179324.</p>			<p>B. Ahmad, <strong>S. Thakur, B. 
	Tan, R. Karri, H. Pearce</strong>, “Fixing Hardware Security Bugs with 
	Large Language Models,” under review. Preprint: <a 
	href="https://arxiv.org/abs/2302.01215">https://arxiv.org/abs/2302.01215</a></p>			<p>R. 
	Kande, <strong>H. Pearce, B. Tan</strong>, B. Dolan-Gavitt, <strong>S. 
	Thakur, R. Karri, J. Rajendran</strong>, “LLM-assisted Generation of 
	Hardware Assertions,” under review. Preprint: <a 
	href="https://arxiv.org/abs/2306.14027">https://arxiv.org/abs/2306.14027</a></p>			</td>		</tr>	</tbody></table><h3>On 
	the day:</h3><table>	<colgroup>		<col span="1" style="width: 25%;" 
	/>		<col span="1" style="width: 75%;" 
	/>	</colgroup>	<tbody>		<tr>			<th>Hands on 
	session</th>			<td>			<p>Content: Audience members will use the language 
	models to achieve various tasks within a simple EDA environment focused on 
	simulation.</p>			<p>Goals: While we will also demo approaches using more 
	complex software, the hands-on session will focus on the use of iverilog, 
	a simple, free, open-source simulator for Verilog designs. iverilog is 
	not demanding (it can be run on local machines/laptops) and runs on 
	Windows, Linux, and 
	macOS.</p>			<p>Pre-requisites: While it is preferable for participants to 
	have installed gcc, build-essential, iverilog, and gtkwave in advance, 
	doing so on the day is not difficult and we can provide guidance at the 
	beginning of the session.</p>			</td>		</tr>		<tr>			<th>Tutorial 
	material</th>			<td>Reference material on the pre-requisites and the 
	manuscripts from the listed references.</td>		</tr>		<tr>			<th>Tutorial 
	plan</th>			<td>			<p>0-15 mins: Introduction and motivation by Hammond 
	Pearce, Ramesh Karri, Siddharth Garg, and Jason Blocklove (presenter 
	TBD)</p>			<p>15-35 mins: Hands-on Chip-Chat - using ChatGPT for writing, 
	simulating, and bug-fixing Verilog by Jason Blocklove and Hammond Pearce 
	(participants will be provided with scripts that they can adapt to 
	interact with ChatGPT for their own tools)</p>			<p>35-60 mins: Hands-on 
	VeriGen: Developing Open-source EDA datasets and models by Shailja Thakur 
	and Jason Blocklove</p>			<p>60-80 mins: AI for Bug Detection: 
	Accelerating hardware fuzzing and flagging bugs and Trojans with 
	Generative AI by Benjamin Tan and JV Rajendran</p>			<p>80-90 mins: Gazing 
	into the Crystal Ball: The future of EDA with Generative AI by Siddharth 
	Garg and Ramesh Karri</p>			</td>		</tr>	</tbody></table></BODY></HTML>
UID:DATE-ET01-20240326T163000-20240326T180000
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Madrid
TZURL:http://tzurl.org/zoneinfo/Europe/Madrid
X-LIC-LOCATION:Europe/Madrid
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19810329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19961027T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
BEGIN:STANDARD
TZOFFSETFROM:-001444
TZOFFSETTO:+0000
TZNAME:WET
DTSTART:19010101T000000
RDATE:19010101T000000
END:STANDARD
END:VTIMEZONE
END:VCALENDAR