PyConIE 2018 - Ireland's Python Conference

Sat 10th - Sun 11th November


About PyConIE 2018

This year's theme is "Education".
Python is playing an increasingly important role in education, from first steps in programming at code dojos, to computer science in secondary schools, engineering in college, and the up-skilling of professionals. Now that Ireland is introducing a Leaving Certificate examination in computer science, we will hopefully soon see digital natives joining our community, and we should be ready to welcome them.
We'll also run a number of beginner-friendly workshops and a career-reboot discussion to help delegates make a career move towards the IT world, where Python is now almost omnipresent.
Seasoned developers will find advanced and expert talks on the latest features of Python 3.7, machine learning techniques, and a great selection of other topics. So join us to learn, teach, share, and enjoy a great weekend.

Talks

The best talks from Irish and International speakers about Python and related technologies.

Workshops

Full workshop tracks to up-skill on some new technology.

Networking

Meet new friends, enjoy Dublin city, and join our entertainment events!

Sponsors

Diamond Sponsor
Platinum Sponsors
Bar and Entertainment
Gold Sponsors
Lanyards Sponsors
Silver Sponsors
Volunteers
Become a sponsor

Entertainment - PyConIE Quiz!

Saturday Night from 7PM. Sponsored by:

8 Rounds - Tech and General Knowledge

Free drinks!

Free pizzas!

Lots of fun!

Advanced Python Workshop

There is also a one-day Advanced Python workshop run by Mike Müller of Python Academy. It takes place on November 9th. This is a great opportunity to lift your Python skills to an advanced level. Full details on this page.

400+
attendees

from everywhere around Ireland and the world

30+
speakers

from around the world

2
talk tracks

2 tracks with a mix of core Python, data science, and web frameworks

2
workshop tracks

With workshops for beginners, advanced users, and data science.

Keynote speakers

Dr. Brett Becker
Miguel Grinberg
Dr Suzanne Little
Keith Quille

Talks

Publish a (Perfect) Python Package on PyPI

Mark Smith

Always wanted to publish a package on PyPI, but didn't know where to start? This talk is for you! Starting with nothing, we'll build a package and publish it on PyPI using current best practices. Publishing a package on PyPI used to be a cargo cult. (And often still is!) Instead of copying and pasting from other projects' `setup.py` without fully understanding what's happening, discover how to package your code for PyPI from scratch - then learn how to avoid doing any of this completely! (But now you'll know what's going on.)

* _Why_ should you package your code for PyPI?
* How to structure your project and your code, including why you need a `src` folder!
* Discover what goes in your `Pipfile` and your `setup.py`, and why. Learn the difference between installing your library to use it, and installing it to develop on it.
* Write tests for your project, and run them using Tox.
* Ensure your code will work on different platforms with Continuous Integration!
* Document your code so people won't ask you loads of questions!
* How to actually get your code on PyPI using Twine. Configure your machine for PyPI and test your package release on the PyPI test server.
* Finally, learn how to avoid doing any of this yourself (or avoid doing it twice) using CookieCutter templates.

By the end of this talk, you'll be so comfortable packaging projects you won't avoid writing `setup.py` files any more! Maybe you'll even start writing new code just so you can publish it on PyPI!
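For a flavour of what the talk covers, a minimal `setup.py` might look like the sketch below. This is an illustration, not the speaker's material; the package name and metadata are placeholders:

```python
# setup.py - a minimal sketch; name and metadata are hypothetical
from setuptools import setup, find_packages

setup(
    name="my-perfect-package",
    version="0.1.0",
    description="A small example package",
    packages=find_packages(where="src"),  # assumes the src/ layout the talk recommends
    package_dir={"": "src"},
    install_requires=[],                  # runtime dependencies go here
    python_requires=">=3.6",
)
```

A release would then typically be built with `python setup.py sdist bdist_wheel` and uploaded with `twine upload dist/*`.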

More Than You Ever Wanted To Know About Python Functions

Mark Smith

What exactly _are_ functions? Let's talk about functions, methods, callables and closures - what they are, what you can give them, what they can give you, what you can do with them ... and what's inside. You probably think you already know everything about functions, but you probably don't!

**Input & Output**: How do you get things in and out of functions? I'll cover parameters and the myriad ways they can be specified, provided and accessed - including helpful hints to avoid common mistakes! I'll cover return values, briefly, along with variable scopes and exceptions.

**Closures**: What are they, how do they work, and how can they affect memory usage?

**Methods**: How does a method differ from a function, when are methods made, how do they work (where does `self` come from?), and how do you access the function inside every method?

**\_\_magic\_\_**: Make your own callables from any object!

**Introspection**: Using modern Python techniques, what can you find out about a function, and what can you do with that information?

**Bytecode**: What happens if you open up a function and look at its insides? Can you change it and put it back together again? (Spoiler: Yes, you can.)

By the end of this talk, I guarantee* you'll know more about callables than when you walked in, along with techniques both practical and so extreme your colleagues will never let you merge them to master. (*This guarantee is legally non-binding and cannot be redeemed in any way.)
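As a small taste of two of these topics, here is a hedged sketch (not the speaker's code) of a closure and a callable object:

```python
# A closure: make_counter's local variable lives on inside the returned function
def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

tick = make_counter()
print(tick(), tick())    # 1 2
print(tick.__closure__)  # the cells keeping `count` alive

# A callable object: any instance with __call__ can be used like a function
class Multiplier:
    def __init__(self, factor):
        self.factor = factor
    def __call__(self, x):
        return x * self.factor

double = Multiplier(2)
print(double(21))        # 42
```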

Functional Programming for Data Science

Neal Ó Riain

Python is a versatile language and it supports a wide variety of programming paradigms. At its heart it's object-oriented, but in this talk I want to discuss how you can use Python to write clean, efficient, and modular functional code. I'll begin by giving a little background on what functional programming is and why you might use it. I'll talk through some of the simple primitives of functional programming, and I'll give some useful examples of functional code for data analysis. The aim is to give a practical and pragmatic introduction to these ideas, covering some of the strengths and weaknesses of Python as a functional language.
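To illustrate the kind of primitives the talk describes, here is a small sketch (an illustration, not the speaker's code) filtering and aggregating hypothetical sensor readings functionally:

```python
from functools import reduce

readings = [12.1, 15.3, -1.0, 14.8, -3.2, 16.0]  # hypothetical sensor data

# Pure, composable steps: drop invalid values, convert units, aggregate
valid = filter(lambda c: c >= 0, readings)
fahrenheit = map(lambda c: c * 9 / 5 + 32, valid)
total = reduce(lambda acc, x: acc + x, fahrenheit, 0.0)
print(total)

# The same pipeline as a generator expression, often the more Pythonic spelling
print(sum(c * 9 / 5 + 32 for c in readings if c >= 0))
```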

Profiling: Find the Squeaky Wheel

Nick Timkovich

Python sometimes gets a bad rap for being a slow language, but slow code can be written in any language. The first step towards accelerating code is identifying where it's slow: a 100x speed-up to something that takes 1% of the time pales in comparison to a 1.1x speed-up of something that takes 50% of the time. The Python Standard Library provides a collection of packages to get started - profile, cProfile, and pstats - and the community's well goes deeper, with interactive visualizations, deterministic vs. statistical profilers, and line vs. call-stack profilers. In this talk, we will demo how to instrument sample slow code using cProfile, then find the source of the problem using the interactive tool SnakeViz. Another demo will be shown using py-spy, a statistical profiler which can be attached to running processes, requiring no modification to existing code.
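A minimal cProfile session, sketched here for context (the talk's own demo code may differ):

```python
import cProfile
import pstats

def slow_sum(n):
    # Deliberately naive: the repeated list building dominates the runtime
    total = 0
    for i in range(n):
        total += sum(list(range(i % 100)))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Sort by cumulative time to see where the program spends its life
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

Running `python -m cProfile -o out.prof script.py` followed by `snakeviz out.prof` gives the interactive view demoed in the talk.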

Can we talk? Machine Translation with Keras.

Bojan Bozic

In this talk I will describe and present ideas for a machine translation prototype implemented in Keras. I will cover Neural Machine Translation (NMT), an approach to machine translation that uses a large neural network, departing from phrase-based statistical approaches that use separately engineered subcomponents. For example, Google now uses Google Neural Machine Translation (GNMT) in preference to its previous statistical methods. NMT shows highly promising performance given large training data. The common principle is encoding the meaning of the input into a concept space and performing translation based on that encoding, which leads to deeper understanding and learning of translation rules, and better translation than SMT. The problem is a tendency to overfit to frequent observations and to overlook special cases: because the translation function is shared, high- and low-frequency pairs impact each other through the shared parameters, and the smoothness of the translation function makes infrequent pairs look like noise.
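For orientation, a bare-bones encoder-decoder skeleton in Keras might look like the following. The vocabulary sizes and dimensions are made up, and the speaker's prototype may be structured quite differently:

```python
from keras.layers import Input, LSTM, Dense, Embedding
from keras.models import Model

src_vocab, tgt_vocab, dim = 8000, 8000, 256  # hypothetical sizes

# Encoder: embed the source sentence and keep the final LSTM states
enc_in = Input(shape=(None,))
enc_emb = Embedding(src_vocab, dim)(enc_in)
_, h, c = LSTM(dim, return_state=True)(enc_emb)

# Decoder: generate the target sentence conditioned on the encoder states
dec_in = Input(shape=(None,))
dec_emb = Embedding(tgt_vocab, dim)(dec_in)
dec_out, _, _ = LSTM(dim, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[h, c])
probs = Dense(tgt_vocab, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```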

Domain-specific Languages in Python. The Why and How.

Vladyslav Sitalo

In this talk, I'd like to discuss how and when DSLs can improve your life. I also review a variety of tools and techniques that can help you with creating internal DSLs in Python.

Domain-specific languages have a long tradition in computer science. Some well-known examples include RegExp and CRON - these are so-called external DSLs: they define their own separate syntax and usually are not related to any general-purpose language. There is also another type of DSL - internal. The most well-known examples of this type come from the Ruby community: RSpec is a popular testing DSL, and Ruby on Rails provides several DSLs useful in the context of web development. Internal DSLs also have a long tradition elsewhere, with LISP and its macros being the most prominent example. Despite this proud tradition, DSLs never gained wide adoption as a tool in the belt of most software craftsmen. Yet thinking about good software practices has evolved over the years in a way that leads us to write code that encodes more and more domain knowledge within it. We've learned the importance of good names to make our code clear and our intentions transparent. And we've learned that creating good abstractions helps us better model our thought process within the code. Code written by people who take these lessons to heart is easy for the reader to understand. And in a sufficiently large project, most of it tends to be expressed in terms of the project domain and not in terms of low-level programming language constructs. Arguably, when we follow these practices we already create DSLs, although we rarely think about our code in those terms.

In this talk, I'd like to discuss the situations when it can be useful for you to take the next step, and to make the DSLs you're creating more explicit by adding build-tool support or employing metaprogramming. I'll focus on internal DSLs, as I think they are an easier and more fruitful starting point for exploring ideas in this area. To illustrate the ideas in this talk, and the thought process that can lead you down the path of DSL development, I will use an open-source project I've developed - SSM Document Generator (https://github.com/awslabs/aws-systems-manager-document-generator). In this project I went through several iterations, and my thinking evolved over time from using YAML configurations to define the relevant domain objects to creating a simple Python internal DSL.

The second part of the talk is dedicated to exploring how you can approach implementing internal DSLs in Python. I review a variety of approaches, techniques, and tools that can help you with this task:

* Creative use of standard syntax constructs like context managers (`with`), the object construction mechanism, etc.;
* Import-time modification of the AST. This is probably one of the most powerful approaches, as it allows you to introduce new syntax or re-interpret existing syntax. It comes with its own drawbacks, though;
* Annotations and metaclasses.

In conclusion: in this talk I'd like to encourage people to explore ideas around creating internal DSLs, and I explore some tools and approaches that can help them on this path.

Some tools and references:
* MacroPy - https://github.com/lihaoyi/macropy
* Python functional pipes - https://github.com/robinhilliard/pipes
* Django forms - https://docs.djangoproject.com/en/2.1/topics/forms/
* SSM Document Generator - https://github.com/awslabs/aws-systems-manager-document-generator
* Domain-Specific Languages by Martin Fowler - http://goodreads.com/book/show/8082269
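To make the "creative use of standard syntax constructs" idea concrete, here is a toy sketch of a context-manager-based internal DSL for building a nested document. It is an illustration only, unrelated to the SSM Document Generator's actual API:

```python
# A toy internal DSL: `with` blocks express nesting declaratively
class Node:
    _stack = []

    def __init__(self, name, **attrs):
        self.name, self.attrs, self.children = name, attrs, []
        if Node._stack:                      # attach to the enclosing block
            Node._stack[-1].children.append(self)

    def __enter__(self):
        Node._stack.append(self)
        return self

    def __exit__(self, *exc):
        Node._stack.pop()

    def render(self, depth=0):
        lines = ["  " * depth + f"{self.name} {self.attrs}"]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

with Node("document", title="example") as doc:
    with Node("section", name="steps"):
        Node("step", action="run", command="echo hi")

print("\n".join(doc.render()))
```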

Leaving Certificate Computer Science and Python

Stephen Murphy

This talk is broken into four sections:

Section 1: The structure of the Leaving Certificate Computer Science pilot scheme
Section 2: The structure of the Leaving Certificate CS specification
Section 3: How Python can be applied to the Leaving Certificate CS specification
Section 4: The Computer Science Teachers' Association of Ireland and Python Ireland

Programming by poking: some experiences in teaching with Python

Ben Fagan

Part of the title originates from a quote by computer scientist Gerry Sussman on the new MIT introduction to programming course, which (in Sussman's view) focused more on building practical applications at the expense of many computer science fundamentals. The talk will look at some of the advantages and pitfalls of the increasing trend of using Python as an introductory programming language: partially through some personal anecdotes, and partially through friends' and colleagues' experiences. It will also look at the increasing divergence between those involved in software development and those who, educated through Python, work primarily in other domains but use their new Python-based knowledge to assist their daily work (in finance, insurance etc.). I have rated this as Intermediate as some knowledge of Python will be needed, as well as an understanding of some computer science basics (e.g. arrays vs. lists). Some side-by-side comparisons will be made between Python and Scheme (aspects of fundamental programming, and comparing a 'traditional' computer science course re-written in Python), Lua (another contender from when I taught some basic programming) and C (comparison to another popular introductory language and its low-level details).

Launch Jupyter to the Cloud: an example of using Docker and Terraform

Cheuk Ho

In this talk, we will use a task - hiring a GPU on Google Cloud Platform to train a neural network - as an example to show how an application can be deployed on a cloud platform with Docker and Terraform. The goal is to have Jupyter Notebook running in an environment with TensorFlow (GPU version) and other libraries installed on a Google Compute Engine instance. First we will briefly explain what Docker and Terraform are, for audience members who have no experience with either or both of them. Some basic concepts of both tools will also be covered. After that, we will walk through each step of the workflow, which includes designing and building a Docker image, setting up a pipeline on GitHub and Docker Hub, writing the Terraform code and the start-up script, and launching an instance. From that, the audience will get an idea of how both tools can be used together to deploy an app onto a cloud platform, and what advantages each tool brings to the process. This talk is for people with no experience in application deployment on cloud services, but who would benefit from computational reproducibility and cloud services - potentially data scientists/analysts or tech practitioners who don't have a software development background. We will use an example that is simple but useful in data science to demonstrate basic usage of Docker and Terraform, which should benefit beginners who would like to simplify their workflow with these tools.

Autism in the Developer Workplace

Ed Singleton

High-functioning autism and near-autism are common in STEM workplaces. Diagnoses of autism are increasing, with some surveys estimating that 1 in 59 children show signs of autism. Many of us have struggled with autistic characteristics, and many have struggled with the autistic characteristics of colleagues. Many of us have also benefited from these same characteristics. This talk will give an overview of autism and my own personal journey as a late-diagnosed autistic person, and will cover methods that autistic people can use to smooth their work lives, as well as methods that non-autistic people can use to cope with their autistic colleagues and ways that they can greatly benefit from their gifts.

High Performance Data Processing in Python

Donald Whyte

The Internet age generates vast amounts of data. Most of this data is unstructured and needs to be post-processed in some way. Python has become the standard tool for transforming this data into more usable forms. numpy and numba are popular Python libraries for processing large quantities of data. When running complex transformations on large datasets, many developers fall into common pitfalls that kill the performance of these libraries. This talk explains how numpy and numba work under the hood, and how they use vectorisation to process large amounts of data extremely quickly. We use these tools to reduce the processing time of a large, real 670GB dataset from one month to 40 minutes, even when the code is run on a single MacBook Pro. Link to a video of a similar talk Donald has given on the subject: https://www.youtube.com/watch?v=MKIrRYKJeAc Links to other talks Donald has given on data processing and machine learning: http://donsoft.io/
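The kind of speed-up involved can be sketched in a few lines (a generic illustration, not the talk's 670GB pipeline):

```python
import numpy as np
from numba import njit

data = np.random.rand(10_000_000)

# Vectorised numpy: one C-level pass over the array, no Python-level loop
result = np.sqrt(data) * 2.0 + 1.0

# The same work as an explicit loop, JIT-compiled to machine code by numba
@njit
def transform(x):
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        out[i] = np.sqrt(x[i]) * 2.0 + 1.0
    return out

result2 = transform(data)  # first call compiles; later calls run at C speed
```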

An Idiot's Guide to (Open) Data Science

Andrew Bolster

This talk covers the setup, configuration, and use of Python data science tools, highlighting some of the technical pitfalls and statistical failings people often come across in the cleaning and analysis of data. The focus is on using multiple datasets from OpenDataNI to generate insights into economic policy and educational attainment in Northern Ireland.

Visualisation in Python - Quick and easy routes to plotting magic

Shane Lynn

The ability to explore and grasp data structures through quick and intuitive visualisation is a key skill of any data scientist. Different tools in the Python ecosystem require varying levels of mental gymnastics to manipulate and visualise information during a data exploration session. The array of available Python libraries, each with its own idiosyncrasies, can be daunting for newcomers and data scientists-in-training. In this talk, we will examine the core data visualisation libraries compatible with the popular Pandas data wrangling library. We'll look at the base-level Matplotlib library first, and then show the benefits of the higher-level Pandas visualisation toolkit and the popular Seaborn library. By the end of the talk, you'll be bar plotting, scatter plotting, and line plotting (never pie charting) your way to data visualisation bliss.
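The three levels the talk compares can be sketched roughly like this (a made-up toy dataset, not the talk's examples):

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.DataFrame({"year": [2015, 2016, 2017, 2018],
                   "attendees": [250, 300, 350, 400]})  # hypothetical data

plt.plot(df["year"], df["attendees"])          # base-level matplotlib
df.plot(x="year", y="attendees", kind="bar")   # pandas' high-level wrapper
sns.barplot(x="year", y="attendees", data=df)  # seaborn's statistical layer
plt.show()
```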

Finite State Machines in Python; Or How I learned to stop worrying and love the automaton

Brian Stempin

Finite state machines are usually the thing of nightmares for CS undergrads. The first question any CS student asks after seeing them is "But where will I ever use this?". The answer surprised us too: you can use FSMs almost everywhere. In this talk we will do a recap on finite state machines, and show you some examples of where we use them at Telnyx. We will also show the transactions library and how we use it to process FSMs in a distributed manner. And we end on a small demo of how you can use FSMs in a real-world application.
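A bare-bones FSM can be as simple as a transition table keyed on (state, event); the following sketch is generic and is not Telnyx's code:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RINGING = auto()
    CONNECTED = auto()

# Transition table: (current state, event) -> next state
TRANSITIONS = {
    (State.IDLE, "incoming_call"): State.RINGING,
    (State.RINGING, "answer"): State.CONNECTED,
    (State.RINGING, "reject"): State.IDLE,
    (State.CONNECTED, "hang_up"): State.IDLE,
}

class CallMachine:
    def __init__(self):
        self.state = State.IDLE

    def dispatch(self, event):
        try:
            self.state = TRANSITIONS[(self.state, event)]
        except KeyError:
            raise ValueError(f"{event!r} not allowed in {self.state}")

m = CallMachine()
m.dispatch("incoming_call")
m.dispatch("answer")
print(m.state)  # State.CONNECTED
```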

Adding the three pillars of Observability to your Python app

Eoin Brazil

This intermediate-level talk will focus on introducing the three pillars of observability (1: structured logging, 2: metrics, 3: tracing) to your Python application. The learning objective is to introduce existing Python developers to each area, as well as best practices (RED/four golden signals) and the specific Python libraries they can use in their applications. The aim is that by the end, people will know how to add specific tools, plus related best practices, to their existing applications to provide greater insight into their systems. The closest comparison is that this talk pragmatically distils the content of the O'Reilly report "Distributed Systems Observability" into concrete actions and libraries to use. Some anecdotes and examples of how these have gone for the speaker in his production systems will also be noted.
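As a taste of the first pillar, structured logging emits machine-parseable key-value events rather than free-form strings. A minimal sketch using the structlog library (one common choice; the talk may cover other libraries):

```python
import structlog

log = structlog.get_logger()

# Every event is a name plus key-value context, trivially searchable later
log.info("payment_processed", order_id=1234, amount_eur=25.0, duration_ms=48)
```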

Building a Fine Grained Image Classification System for Nature Images

Fergal Walsh

At Fieldguide we are developing a digital field guide for all species of flora and fauna across the planet. We are using image recognition technology to enable species identification and to help with the curation of this massive catalogue. In this talk I will describe how we are building an image recognition system with the aim of identifying all known species in the natural world. The system has gone through a number of iterations at this point, using a variety of computer vision and machine learning techniques, from nearest neighbour search to classification with fine-tuned deep convolutional networks. All of this has been implemented in Python using scikit-learn, Numpy, Caffe and Tensorflow. Aside from the obvious machine learning challenges in designing and training such a system, we faced numerous technical challenges while implementing and scaling it in a cost-effective manner. I will discuss these challenges, our solutions and the remaining open problems. While this talk will be relatively high-level, with few code examples and no math (but lots of moths), it will be of most interest to those who have some knowledge of machine learning concepts.
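As a hint of the nearest-neighbour end of that spectrum, a similarity lookup over image embeddings can be sketched in a few lines with scikit-learn (random stand-in features, not Fieldguide's system):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical stand-in for CNN image embeddings, one row per catalogue image
rng = np.random.RandomState(0)
features = rng.rand(1000, 128)

# Index the catalogue, then fetch the 5 images most similar to a query image
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(features)
distances, neighbours = index.kneighbors(features[:1])
print(neighbours)  # indices of the closest catalogue images
```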

Django APIs, Versioning and You

Rebecca Martin

This talk will attempt to explain how to version an API within the Django framework. Say you need to change the data that your API returns to your users, but any major change would break the API for users of previous versions. This talk will explain how to avoid this problem, which, as a developer who works on APIs with the Django framework every day, is a constant problem that I have to face. This talk will cover:

1. Why do I even have to version my API in the first place? Surely I can just make changes?
2. Okay, now I understand why to version my API in Django. What about the how? (This will mainly focus on the Django REST framework's versioning library, but other methods will be considered.)
3. A live demo of breaking changes (oh no!) and how we can apply our knowledge from the second part of the talk to avoid them.
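For a sense of the Django REST framework approach, its built-in versioning can be wired up roughly like this (a sketch under assumed settings, not the talk's code):

```python
# settings.py - enable URL-path versioning via DRF's built-in settings
REST_FRAMEWORK = {
    "DEFAULT_VERSIONING_CLASS": "rest_framework.versioning.URLPathVersioning",
    "ALLOWED_VERSIONS": ["v1", "v2"],
    "DEFAULT_VERSION": "v1",
}

# views.py - branch on the requested version inside a view
from rest_framework.views import APIView
from rest_framework.response import Response

class UserView(APIView):
    def get(self, request, format=None):
        if request.version == "v2":
            data = {"full_name": "Ada Lovelace"}  # new response shape
        else:
            data = {"name": "Ada Lovelace"}       # legacy shape kept intact
        return Response(data)
```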

Asynchronous programming in Python, or the art of living backwards

Mikhail Medvedev

Asynchronous programming can have many advantages, but may be awfully complicated. It also requires a developer to think differently. In this talk I will go through what async programming is, when you should or should not use it, and what we can do to avoid getting lost. I will also explore various approaches and tools available in Python.
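The core idea can be shown in a few lines; a minimal sketch (not the speaker's material):

```python
import asyncio

async def fetch(name, delay):
    # Pretend to do I/O; control returns to the event loop while we wait
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Both "requests" run concurrently instead of back to back
    results = await asyncio.gather(fetch("a", 1), fetch("b", 1))
    print(results)  # finishes in ~1 second, not ~2

asyncio.run(main())  # Python 3.7+
```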

Natural Language Processing: An Application for Public Policy

Ancil Crayton

In this session, I would like to apply NLP methods to explore press releases by governments in order to understand public policy decisions. This talk will be based on my PhD research in applying topic models to Federal Reserve communication and analyzing how important themes influence financial markets. I will walk through a Jupyter notebook that covers the main steps of text preprocessing, feature extraction, topic modeling (LDA or NMF), visualizing topics, and possibly regression analysis to assess the impact of the information on financial markets. It will include mentions of packages like gensim, Scikit Learn, Word Cloud, and possibly statsmodels. I will make this available in a public repo on GitHub or a notebook on Google Colab that will allow participants to follow along as well. I hope that this session will inspire participants to apply their Python and data science skills to interesting problems lurking in public policy.
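The topic-modelling step might look roughly like this in gensim (a toy sketch with made-up tokens, not the actual notebook):

```python
from gensim import corpora, models

# Hypothetical: each document is a pre-tokenised press release
docs = [["rate", "inflation", "policy", "rate"],
        ["market", "liquidity", "rate", "easing"],
        ["inflation", "employment", "policy"]]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# Fit a small LDA topic model and inspect the discovered themes
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
print(lda.print_topics())
```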

What is new in Python 3.7?

Stephane Wirtel

Released in June, before the conference, Python 3.7 is a feature-packed release! This talk will cover all the new features of Python 3.7, including data classes and context variables for asynchronous programming with asyncio.
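Two of those headline features in a few lines (a sketch for context, not the talk's slides):

```python
from dataclasses import dataclass
import contextvars

# Data classes (PEP 557): value objects without __init__/__repr__ boilerplate
@dataclass
class Talk:
    title: str
    speaker: str
    minutes: int = 30

# Context variables (PEP 567): task-local state that is safe under asyncio
request_id = contextvars.ContextVar("request_id", default=None)
request_id.set("abc-123")

print(Talk("What is new in Python 3.7?", "Stephane Wirtel"))
print(request_id.get())
```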

Python on Windows is Okay, Actually

Steve Dower

Packages that won't install, encodings that don't work, installers that ask too many questions, and having to own a PC are all great reasons to just ignore Windows. Or they would be, if they were true. Despite community perception, more than half of Python usage still happens on Windows, including web development, system administration, and data science, just like on Linux and Mac. And for the most part, Python works the same regardless of what operating system you happen to be using. Still, many library developers will unnecessarily exclude half of their potential audience by not even attempting to be compatible. This session will walk through the things to be aware of when creating cross-platform libraries. From simple things like using pathlib rather than bytestrings, through to all the ways you can get builds and tests running on Windows for free, by the end of this session you will have a checklist of easy tasks for your project that will really enable the whole Python world to benefit from your work.
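The pathlib point alone removes a whole class of cross-platform bugs; a small sketch (file names are hypothetical):

```python
from pathlib import Path

# pathlib handles separators on every OS - no hand-built "dir + '/' + name"
config = Path.home() / ".myapp" / "config.toml"
config.parent.mkdir(parents=True, exist_ok=True)

# Being explicit about encodings avoids the classic Windows surprises
config.write_text("greeting = 'hello'\n", encoding="utf-8")
print(config.read_text(encoding="utf-8"))
```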

Prediction of risk factors for stroke using classification algorithms (logit, SVM)

Olga Lyashevska

Atrial fibrillation (AF) is the most common irregular heartbeat among the world's population and is a major contributing factor to clot formation within the heart. When such a blood clot enters the cardiovascular system, it first must travel along the ascending aorta. The clot may travel along the aortic arch and towards the brain through the left and right common carotid arteries. If a clot enters these vessels, it can become lodged within the smaller vessels of the brain and cause a stroke. We apply supervised machine learning classifiers (logit, SVM) for detecting stroke probability using simulation data. Various scenarios are implemented to examine the impact of variables such as the shape of the aortic arch, varying clot dimensions, and the entry point. Model selection tools (grid search, cross-validation) are applied and classification probabilities are calculated for each classifier. The application will be shown using Jupyter notebooks.
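The model-selection machinery described above can be sketched with scikit-learn as follows (synthetic data standing in for the simulation outputs):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the simulation features (arch shape, clot size, ...)
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Grid search with cross-validation for each classifier, as in the talk
candidates = [
    (LogisticRegression(solver="liblinear"), {"C": [0.1, 1, 10]}),
    (SVC(probability=True), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}),
]
for model, grid in candidates:
    search = GridSearchCV(model, grid, cv=5)
    search.fit(X_train, y_train)
    print(type(model).__name__, search.best_params_,
          search.score(X_test, y_test))
```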

Workshops

Deep Diving into GANs: From Theory to Production

Michele De Simoni

With our accrued experience with GANs, we would like to guide you through the required steps to go from theory to production with this revolutionary technology: starting from the very basics of what a GAN is, passing through a TensorFlow implementation using the most cutting-edge APIs available in the framework, and finally production-ready serving at scale using Google Cloud ML Engine. NOTE: this is an extended version of the workshop we held at EuroSciPy 2018. Materials and detailed description: https://github.com/zurutech/gans-from-theory-to-production

Introduction to Spark Streaming

Franco Galeano Manuel Ignacio

In this workshop attendees will learn how to process streams of data in real time using Spark and Python. A series of coding exercises will guide the audience through the most relevant Spark DStreams features. Attendees will also learn how to integrate Spark streams with other Spark extensions.

Key features:
* Consume data streams in real time from TCP servers.
* Apply data processing techniques such as map-reduce to live streams of data.
* Integrate data stream processing with other Spark extensions.

Description: Processing big data in real time is a challenging endeavour for several reasons, such as scalability difficulties, consistency of the information, and tolerance to faults, among others. Apache Spark provides a collection of APIs that can be used to perform general-purpose computation in clustered environments. The aim of this workshop is to give an introduction to processing data in real time using Spark. The proposed workshop contains three sections, each one of 30 minutes; the remaining time will be used to answer questions and help attendees with the practical exercises. The first section provides an introduction to the most relevant features of Spark, including Resilient Distributed Datasets (RDDs), SQL, and DataFrames. The second section covers how to use the Spark Streaming API to consume data streams in real time from TCP sockets. The third section shows how to integrate Spark Streaming with the Spark machine learning extension.

Audience: This workshop is aimed at software engineers, architects, and IT professionals in general with an interest in distributed systems and big data analytics. No previous knowledge of or experience with Spark is required, but it will be helpful. Basic Python knowledge is expected.

Biography: Manuel Ignacio Franco Galeano is a computer scientist from Colombia. He works for Fender Musical Instruments as a lead engineer in Dublin, Ireland. He holds an MSc in computer science from University College Dublin (UCD). His areas of interest and research include music information retrieval, data analytics, distributed systems, and blockchain technologies, among others.
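The classic DStreams starting point looks like this (a generic word-count sketch, not the workshop's exercises):

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Count words arriving on a TCP socket (feed it with e.g. `nc -lk 9999`)
sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 1)  # 1-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```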

Introducing Python Programming for Data Analysis

Michelle Almeida

This talk will address how to introduce Python programming to broaden software developers' skills for handling large data sets. You will be introduced to open-source tools and data sets for practising Python programming, including Anaconda and WEKA. You will also learn where to source large data sets and how to program in Python. There will be a demo and walkthrough of extract, load and transform (ELT) of a big data set, with examples of Python coding as well as tips on how to get started and where to go for further information.

A Quick Offline Trip Through Kubernetes

Steve Holden

This workshop is aimed at Python programmers who aspire to learn about Kubernetes and take advantage of the efficiency and flexibility it offers. We will use a simple Flask application that allows users to create, edit and save Jupyter notebooks "in the cloud" - you can think of it as a poor man's JupyterHub - to teach the participants about Kubernetes. After this workshop, the participants will be able to:

1. Describe containers and container orchestration
2. Describe the architecture and components of Kubernetes
3. Run a Kubernetes cluster on their notebook
4. Deploy and manage services on the cluster
5. Write declarative recipes for reproducing the setup universally

TDD in Python with pytest

Leonardo Giordani

Test-Driven Development is a methodology that can greatly improve the quality of your software. I strongly believe that developing software without following a test-driven approach as much as possible leads to massive delays and greater issues when requirements change (always, that is). In this workshop we will develop a very simple Python project following TDD with the help of the pytest framework. We will work together, and no previous knowledge of testing or the testing framework is required. A minimum knowledge of Python is required, but the project will be very simple, so that we can focus on learning the testing methodology. Presented at the London PyGirls Meetup and PyCon UK 2018. The workshop lasts 3-4 hours and cannot be squeezed into 2 hours; if no longer slot is available, I will have to withdraw the proposal.
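The red-green rhythm of TDD with pytest looks roughly like this (a toy example for context, not the workshop project; the `calculator` module is hypothetical):

```python
# test_calculator.py - in TDD this file is written first and fails (red)
import pytest
from calculator import add  # hypothetical module under test

def test_add_two_numbers():
    assert add(2, 3) == 5

def test_add_rejects_strings():
    with pytest.raises(TypeError):
        add("2", 3)

# calculator.py - then just enough code is written to pass (green)
def add(a, b):
    if not all(isinstance(x, (int, float)) for x in (a, b)):
        raise TypeError("add() only accepts numbers")
    return a + b
```

Running `pytest` discovers and executes the tests automatically.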

Dive into object-oriented Python

Leonardo Giordani

Each language has its own object-oriented implementation, which can differ in subtle or unexpected ways from others. Newcomers to Python - whether they are coming from another language, or learning programming through Python for the first time - sometimes encounter some 'strange' issues, but understanding Python's OOP implementation will help make many of them seem a lot less strange. This workshop will introduce beginners to Python's beautiful but sometimes peculiar implementation of OOP concepts. It's ideal for people who have a bit of Python knowledge and experience, and need to move from first steps to a deeper understanding. The workshop has been presented at many Python conferences, including PyCon Ireland in past years, and has always been packed with attendees. This year I completely reworked the slides to better explain some of the points where attendees struggled in the past.
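One of those 'peculiar' corners - a method is just a function retrieved through an instance - can be seen directly (a tiny sketch for context, not workshop material):

```python
class Greeter:
    def greet(self):
        return f"Hello from {self!r}"

g = Greeter()

# Calling the class's function with the instance is exactly what g.greet() does
print(Greeter.greet(g) == g.greet())      # True
print(g.greet.__func__ is Greeter.greet)  # True - same function underneath
```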

Natural Language Processing: An Application for Public Policy

Ancil Crayton

In this session, I would like to apply NLP methods to explore press releases by governments in order to understand public policy decisions. This talk will be based on my PhD research in applying topic models to Federal Reserve communication and analyzing how important themes influence financial markets. I will walk through a Jupyter notebook that covers the main steps of text preprocessing, feature extraction, topic modeling (LDA or NMF), visualizing topics, and possibly regression analysis to assess the impact of the information on financial markets. It will include mentions of packages like gensim, Scikit Learn, Word Cloud, and possibly statsmodels. I will make this available in a public repo on GitHub or a notebook on Google Colab that will allow participants to follow along as well. I hope that this session will inspire participants to apply their Python and data science skills to interesting problems lurking in public policy.

Write your own Bitcoin clone in Asyncio

Rigel Di Scala

Richard Feynman wrote on a blackboard: "What I cannot create, I do not understand". We can build a solid understanding of the Bitcoin protocol by implementing a clone from its fundamental building blocks: asymmetric encryption, a peer-to-peer network (using the asyncio module), and proof of work. This "talkshop" explains how cryptocurrencies work, in simple terms and with practical examples. Only some basic knowledge of Python is required. We will also learn about cypherpunk culture, the value of money, and how decentralised systems work.
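The proof-of-work building block, for example, fits in a dozen lines (a simplified sketch, not the workshop's code):

```python
import hashlib

def proof_of_work(block_data: bytes, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print(proof_of_work(b"block #1: alice pays bob 5 BTC"))
```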

Location