On Isomorphism Invariance and Isomorphism Reflection in Type Theory
Isomorphism Invariance, also called the Principle of Isomorphism and the Principle of Structuralism,
is the idea that isomorphic mathematical objects share the same structure and relevant properties, or that whatever can reasonably be done with an object can also be done with an isomorphic copy.
Isomorphism Invariance: “If A ≅ B and Φ(A) then Φ(B).”
Without any limitation on Φ, we may take Φ(X) to be A = X and obtain:
Isomorphism Reflection: “If A ≅ B then A = B.”
Conversely, Isomorphism Reflection implies Isomorphism Invariance by Leibniz's law (the indiscernibility of identicals).
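The converse direction can be made precise in a proof assistant. Here is a minimal Lean 4 sketch (assuming Mathlib's `Equiv` type for ≅): postulating Isomorphism Reflection as a hypothesis, Isomorphism Invariance follows by transporting along the resulting equality, which is exactly Leibniz's law.

```lean
import Mathlib.Logic.Equiv.Defs

example
    (iso_refl : ∀ {A B : Type}, A ≃ B → A = B)  -- Isomorphism Reflection
    (Φ : Type → Prop) (A B : Type)
    (e : A ≃ B) (h : Φ A) : Φ B :=
  iso_refl e ▸ h  -- transport Φ A along A = B to get Φ B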
Depending on how we interpret ≅ and =, these principles might be plainly false, uninspiringly true, or a foundational tenet:
- In set theory, if ≅ is existence of a bijection, the principles are false.
- In set theory, if ≅ is isomorphism of sets qua ∈-structures, both principles are true because Isomorphism Reflection is just the Extensionality axiom.
- In type theory, if = is propositional equality and ≅ is equivalence of types, Isomorphism Reflection is (a consequence of) the Univalence axiom.
While Isomorphism Invariance is widely used in informal practice, with ≅ understood as existence of structure-preserving isomorphism, Isomorphism Reflection seems quite unreasonable because it implies bizarre statements, e.g., that there is just one set of size one. Any formal justification of the former must therefore address the tension with the latter principle.
In this talk I will review what is known about the formal treatment of these principles, recall the cardinal model of type theory by Théo Winterhalter and myself, which shows that Isomorphism Reflection is consistent when = is taken to be judgemental equality, and discuss the possibility of other models validating judgemental Isomorphism Reflection that might be compatible with non-classical reasoning principles. I shall also touch on the possibility of a restricted form of Isomorphism Reflection that would provide a satisfactory formal definition of “canonical isomorphism”.
Title: Intersection and Simple Types
When, in the seventies of the last century, Coppo and Dezani designed intersection types, their main motivation was to extend the typability power of simple types by adding an intersection connective enjoying associativity, commutativity and idempotency, thus denoting set formation. In fact, the simple type system can be seen as a restriction of the intersection type system in which all sets are singletons. Quite early, intersection types turned out to be useful for characterizing qualitative properties of the λ-calculus, like solvability and strong normalization, and for describing models of the λ-calculus in various settings. A variant of intersection types in which the intersection is no longer idempotent has recently been used to explore quantitative properties of programming languages, like the length of the normalisation procedure. It is natural to ask whether there is a quantitative version of the simple type system, or more precisely a decidable restriction of the non-idempotent intersection system with the same typability power as simple types. Owing to the lack of idempotency, the intersection now corresponds to multiset formation, so (extending the previous reasoning) the natural answer is to restrict multiset formation to copies of the same type. But this answer is wrong: the system so obtained is decidable, but it has less typability power than simple types. We prove that the desired system is obtained by restricting multiset formation to equivalent types, where the equivalence is an extension of the identity, modulo the cardinality of multisets.
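To make the closing idea concrete, here is a hedged Python sketch of one possible reading of an equivalence that is "the identity modulo the cardinality of multisets" (this is an illustrative guess, not the authors' definition): atoms are compared by name, and arrow types are compared by comparing the members of their source multisets while ignoring how many times each member occurs.

```python
# Hypothetical representation of non-idempotent intersection types:
# an Atom, or an Arrow from a multiset of types (a tuple, duplicates
# allowed) to a target type.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Arrow:
    source: tuple   # multiset of argument types
    target: object  # result type

def equiv(t, s):
    """Identity on atoms; on arrows, compare multiset members up to
    equiv while ignoring multiplicities (i.e. cardinality)."""
    if isinstance(t, Atom) and isinstance(s, Atom):
        return t == s
    if isinstance(t, Arrow) and isinstance(s, Arrow):
        return (all(any(equiv(x, y) for y in s.source) for x in t.source)
                and all(any(equiv(y, x) for x in t.source) for y in s.source)
                and equiv(t.target, s.target))
    return False

a = Atom("a")
# [a, a] -> a and [a] -> a differ only in multiset cardinality:
print(equiv(Arrow((a, a), a), Arrow((a,), a)))
# [a] -> a and [a -> a] -> a are genuinely different types:
print(equiv(Arrow((a,), a), Arrow((Arrow((a,), a),), a)))
```

Under this reading, restricting multiset formation to pairwise equivalent types still permits multisets such as [a, a] that the "copies of the same type" restriction would also allow, while identifying types that differ only in how many copies a multiset contains.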
Title: The differentiation monad
In this talk, we will see how a twist in the continuation monad allows us to express the quantitative semantics of programs. I will explain how this new monad comes from the search for a fourth missing rule in Differential Linear Logic, an extension of Linear Logic allowing for the linearization of proofs. Finally, we will dive into concrete interpretations of this monad and put forward the link with graded monads.
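For reference, the ordinary continuation monad that the talk twists can be sketched in a few lines (the twist itself is not reproduced here). A computation of type Cont R A is a function taking a continuation A -> R and producing an R:

```python
# The standard continuation monad: Cont R A = (A -> R) -> R.

def unit(a):
    """Wrap a value: hand it directly to the continuation k."""
    return lambda k: k(a)

def bind(m, f):
    """Run m, feed its result a to f, and continue with k."""
    return lambda k: m(lambda a: f(a)(k))

# Usage: compute (2 + 3) * 10 in continuation-passing style.
prog = bind(unit(2), lambda x:
       bind(unit(3), lambda y:
       unit((x + y) * 10)))
print(prog(lambda r: r))  # run with the identity continuation → 50
```

Quantitative semantics in the style alluded to above can then be obtained by varying the answer type R and the way continuations are combined, which is where the monad's "twist" enters.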
Title: Verified Extraction from Coq to OCaml
A central claim to fame of the Coq proof assistant is extraction to
languages like OCaml, used centrally in landmark projects such as
CompCert. Extraction was initially conceived and implemented by Pierre
Letouzey, and it still guides design decisions of Coq's type theory.
While the core extraction algorithm is verified on paper, central
features like optimisations (of which there are 10 the user can
enable) only have empirical correctness guarantees.
In the scope of the MetaCoq project, which aims at placing Coq's type
theory on a well-defined and fully formal foundation, I am working with
other members of the MetaCoq team on a re-implementation and
verification of all aspects of Coq's extraction process to OCaml. The
new extraction process is based on a formal semantics of Coq as
provided by MetaCoq and a formal semantics of the intermediate language
of the OCaml compiler derived from the Malfunction project.
In my talk, I plan to discuss the current state of this verification,
its goals, possible extensions, and design decisions made along the
way; the trusted computing and theory bases (and in particular ideas
for reducing them); problems arising with Coq and the surrounding
infrastructure; and the impact on other projects. I will conclude with
thoughts on how other proof assistants can learn and benefit from the
lessons we have learned.