
Compositional

4 Episodes

72 minutes | 14 days ago
The Haskell Language Server with Zubin Duggal
The Haskell Language Server (HLS) has shown how powerful the wealth of compile-time information that Haskell provides can be when combined with instant feedback. For many, it is now a standard component of their development workflow. HLS runs complex, finely tuned machinery that handles large amounts of data about files, expressions, types and more, syncing it with the editor and continuously providing relevant information that can be displayed to the user. In this episode, Roman Cheplyaka talks to Zubin Duggal, who has contributed to HLS for years: personally, through various Summer of Code projects, and recently as a Tweag Open Source Fellow. He explains how HLS emerged from various predecessors and the role the Language Server Protocol played in reducing implementation effort, then dives deep into the inner workings of this tool. If you want to understand what is happening when your editor shows information about your code, or if you want to get involved and contribute to HLS or similar tools, this episode is for you! Special Guest: Zubin Duggal.

Links:
- The Haskell Language Server repository
- The Language Server Protocol
- A blog post about Zubin's work as a Tweag Fellow
37 minutes | 2 months ago
A content addressable store for Nix with Théophane Hufschmitt
In this episode, Rok Garbas interviews Théophane Hufschmitt, who is implementing content-addressed storage for Nix. Théophane explains why this feature is so useful for build systems and why he started working on it. He also gives a glimpse into what working with the core Nix C++ codebase feels like. Nix packages are usually addressed by the hashes of all the build inputs from which they are derived, not by their content, the build output. This makes a lot of sense for a package manager, because we can identify and retrieve a package precisely by the sources, build instructions and dependencies it corresponds to. However, there are situations where it is advantageous to address a package by its content: for example, to avoid unnecessary recomputation when packages produce the same build outputs even though their build inputs vary, a feature called early cutoff. More information in the links below! Special Guest: Théophane Hufschmitt.

Links:
- Tweag blog: Towards a content-addressed model for Nix — A brief overview of the why and how of the content-addressable store in Nix.
- Tweag blog: Self-references in content-addressed Nix — This post goes into detail on why combining a build-input-addressed store with a content-addressed store is not as easy as it seems.
- The original RFC that proposes to implement a CAS for Nix — New Nix features are proposed in an RFC (request for comments) document on GitHub and then reviewed. This is the original proposal for implementing a content-addressable store, with the associated discussion.
- Build Systems à la Carte — This research paper gives an overview of a variety of build systems and the features they support. The ability to "early cutoff" computations when certain output targets already exist will be unlocked by the content-addressable store in Nix.
- An introduction to content-addressable storage — This article gives a short introduction to content-addressable storage using the example of IPFS, which is also referred to in this episode.
- The section in the original Nix PhD thesis that already proposes a content-addressed store model — This so-called intensional store model aims to make guarantees about internal properties of software packages, such as file content. This is opposed to the common extensional model in Nix, which aims to make guarantees only about relevant external properties.
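The early-cutoff idea discussed in the episode can be sketched in a few lines of Haskell: key a build step's result by a hash of its output content, and skip downstream rebuilds whenever that hash is unchanged, even if the inputs changed. This is a hypothetical illustration, not Nix's actual implementation; `contentHash` and `build` are toy stand-ins (Nix uses cryptographic hashes and real builders).

```haskell
import Data.Char (isSpace, ord)
import Data.List (foldl')

-- Toy content hash (illustration only; Nix uses cryptographic hashes).
contentHash :: String -> Int
contentHash = foldl' (\h c -> h * 31 + ord c) 5381

-- A toy "compiler" whose output ignores whitespace, so different
-- inputs can produce byte-for-byte identical outputs.
build :: String -> String
build = filter (not . isSpace)

-- Early cutoff: downstream steps rebuild only when the *output* hash
-- changes. An input-addressed model keys on the inputs instead, so any
-- input change forces downstream rebuilds even when the output is the same.
needsDownstreamRebuild :: Maybe Int -> String -> Bool
needsDownstreamRebuild lastOutputHash input =
  lastOutputHash /= Just (contentHash (build input))

main :: IO ()
main = do
  let h1 = contentHash (build "main = print 42")
  -- Input changed (extra whitespace) but output is identical: cutoff.
  print (needsDownstreamRebuild (Just h1) "main =  print  42")
  -- Output actually changed: downstream must rebuild.
  print (needsDownstreamRebuild (Just h1) "main = print 43")
```

Running this prints `False` then `True`: the whitespace-only change is cut off before it propagates, while the real change triggers a rebuild.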
43 minutes | 3 months ago
Subsumption and impredicative types with Richard Eisenberg
Subsumption, the process of figuring out whether one type is a subtype of another, is fundamental to GHC's type checker and was recently changed. In this episode, Richard Eisenberg explains what subtypes are, how subsumption works, and why some previously accepted programs will soon start to be rejected by GHC. He then talks about how these changes help with inferring impredicative types, an advanced form of polymorphism that allows you to put forall quantifiers anywhere in a type signature, such as inside a list. Music by Kris Jenkins. Special Guest: Richard Eisenberg.

Links:
- The Wikipedia article on subtyping and subsumption — "[...] a subtype is a datatype that is related to another datatype (the supertype) by some notion of substitutability, meaning that program elements, typically subroutines or functions, written to operate on elements of the supertype can also operate on elements of the subtype [...] In type theory the concept of subsumption is used to define or evaluate whether a type S is a subtype of type T."
- Explanation of Levity Polymorphism on StackOverflow — This concept is briefly touched on in the episode.
- The GHC proposal arguing for a stricter subsumption judgement — This proposal initiated a large part of the work that Richard talks about. The changes it brought about will be included in GHC 9.0.
- A short explanation of Impredicative Types on the Haskell Wiki — "Impredicative types take this idea to its natural conclusion: universal quantifiers are allowed anywhere in a type, even inside normal datatypes like lists or Maybe. [...] However, impredicative types do not mix very well with Haskell's type inference [...]"
- The Quick Look Impredicativity GHC proposal — The goal of this proposal was to significantly enhance the current state of impredicative types in Haskell. It has been accepted, implemented, and will also be available in GHC 9.0.
55 minutes | 3 months ago
The new random with Leonhard Markert and Dominic Steinitz
Haskell's random library has provided an interface to pseudo-random number generation for non-cryptographic applications since the Haskell 98 report. Over the years there has not been much development activity on the library, despite well-known quality and performance problems. Alexey Kuleshevich's blog post comparing the performance of pseudo-random number generator packages showed that even when used with a fast pseudo-random number generator, the interface provided by random slowed down the generation of random values significantly. So a small group consisting of Alexey Kuleshevich, Alexey Khudyakov, and the two guests of this episode got together to improve the random library in terms of both quality and performance. This work culminated in the release of version 1.2.0 of the random library in late June. In this episode, we talk about the work that went into this release, and some of the things we discovered along the way. Music by Kris Jenkins. Special Guests: Dominic Steinitz and Leonhard Markert.

Links:
- "random" in The Haskell 98 Library Report — The description of Haskell's random library as it was used for a long time.
- Benchmarks of "random" — The benchmarks in this blog post were one reason to consider moving to a new standard pseudo-random number generator in Haskell's random.
- Blog post on Tweag I/O — This article goes step by step through the motivation and technical details of the changes discussed in this podcast episode.
- The new version of "random" on Hackage
© Stitcher 2020