akuklev: (Candle and Valocordin)
[livejournal.com profile] orleans: FP has objective advantages when it comes to execution on multiprocessor systems and organizing unit tests. Everything that can be pushed through a function pipeline without seriously complicating the code should be pushed there.

[livejournal.com profile] akuklev: Honestly, all that stuff about multiprocessor systems and unit tests is nonsense. If you take imperative code and push it somewhere, it doesn't get any better at all, neither for unit tests nor in any other respect.

Our grandfathers wrote specifications of programs and then their implementations (in machine code). We have lived to see the times when you can write just the specification and run it. Once you've written the “specification”, run it, and convinced yourself that it's correct, then you can take a look: if something runs too slowly, doesn't parallelize, or eats too much memory, you can optimize individual spots at a low level. And don't think that low-level optimization necessarily means assembly/C and other close-to-machine languages. When a thing that is naturally expressed as a loop is forcibly squeezed into the FP mold so that it parallelizes better, that is low-level optimization.

A specification is a description of a procedure or module from which it is obvious what exactly it does (such a description is not unique, and the choice between variants is a matter of taste). For example, here is one possible specification of the sort procedure:
“The sort procedure takes a collection of elements and returns the list consisting of the minimal element of that collection, then the minimal element of the remaining collection, and so on.”
 sort(items: Collection) = min(items) cons sort(items without min(items))
 sort(Collection.Empty) = List.Empty
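
As a sanity check, this specification transcribes almost verbatim into plain Scala; here is a sketch (deliberately the naive quadratic selection sort, with no optimization whatsoever):
 def sort[T](items: Seq[T])(implicit ord: Ordering[T]): List[T] =
   if (items.isEmpty) Nil
   else {
     val m = items.min
     m :: sort(items.diff(Seq(m)))  // min(items) cons sort(items without min(items))
   }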


What is good about such a definition? That it's functional? Hell no: the fact that it's functional is a minor detail and a coincidence. What's really good about it is that it corresponds to a natural-language description a non-programmer can understand, while being maximally simple: the probability of making a mistake in such a definition is minimal. And if you also do cross code review and write a couple of unit tests, it becomes vanishingly small.

But if you define, say, the sort procedure via a clever algorithm which you still have to prove actually works, the question arises what “proving that X works correctly” even means. It means proving that X behaves equivalently to its specification. And where is that specification? One way or another it has to be written, and the proof has to be carried out, because otherwise the probability of error is colossal: I know of cases where bugs in nontrivial algorithmics were discovered after years of the software running in production, meaning no amount of throwing unit tests at them would have uncovered them. That's why I'm an adept of writing programs as their own constructive specifications and resorting to optimizations as rarely as possible.

I love FP exactly insofar as the functional style is often ideal for writing constructive specifications of functions. However, for writing constructive specifications of modules and of the domain-model entities of the application I'm building, functional machinery is usually not enough. That's why I'm an even bigger adept of multi-paradigm languages, where you can knock together a DSL matching the domain model in a day and then write code in this specification style, avoiding incidental details.

At the moment, by the way, there are two such languages: Perl and Scala. Both are fiercely beautiful, but they share a common flaw: in the hands of an inexperienced programmer they are like a live grenade in the hands of a two-year-old. Perl is better for rapid prototyping (because it has no static typing), Scala for everything else (because it does).
akuklev: (Default)
I'd like to share some ideas on features that might be included in the next major Scala releases. Some of them might already be considered for Scala 2.11, some are probably more appropriate for Scala 3.

akuklev: (Default)
Modulo typos, clumsy wording, and insufficient proofreading, I have finished the post Vision of Scala 3, which is about where, in my view, the Scala language should be heading and how I see its future.

I want to post this to scala-debates, but first it needs to be proofread and the errors fixed. Folks, please help me with this difficult task! And in general, any feedback will be useful. The post has already been refined and improved many times as comments arrived.
akuklev: (Default)
Often there are many implementations of data structures and algorithms with different performance. Currently, Scala provides sensible defaults:
val xs = List(1, 5, 2, 8, 4)
val xSorted = xs.sort
Unfortunately, there is no elegant way to override the defaults, either locally or for an entire scope. In our opinion the most elegant way to achieve this is implicits. If factories and methods with multiple implementations accepted the desired implementation as an implicit argument, this is how we'd be able to override the defaults:
implicit val sortMethod = SortMethod.TimSort
implicit val listVariant = ListVariant.DoublyLinkedList
val xs = List(1, 5, 2, 8, 4)
val xSorted = xs.sort
// and now override them locally
val xs2 = List(1, 5, 2, 8, 4)(ListVariant.XorList)
val xSorted2 = xs2.sort(SortMethod.QuickSort)
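
The non-literal part of this proposal can already be sketched in today's Scala with plain implicit parameters (ListVariant and the Lists factory below are hypothetical names, not an existing API):
sealed trait ListVariant
object ListVariant {
  case object SinglyLinked extends ListVariant
  case object Indexed extends ListVariant
  implicit val default: ListVariant = SinglyLinked  // the overridable default
}
object Lists {
  // The factory picks a concrete implementation from the implicit variant.
  def apply[T](xs: T*)(implicit variant: ListVariant): Seq[T] = variant match {
    case ListVariant.SinglyLinked => List(xs: _*)
    case ListVariant.Indexed      => Vector(xs: _*)
  }
}
A local implicit val listVariant = ListVariant.Indexed then overrides the companion-object default, since local implicits take precedence over the implicit scope.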
This approach could be extended to numeric literals: we'd need the literal 42 to be interpreted as Int("42"), where Int.apply takes an implicit argument "variant" (Int16, Int32, Int64, BigInteger), and 42.1 as Fractional("42.1") with an implicit argument "variant" (Float, Double, Decimal, BigDecimal, FixedPoint(accuracy), Rational(accuracy), ContFraction(accuracy)):
implicit val intVariant = IntVariant.BigInteger
val x = 234794320943279322878

In order to completely separate implementation/optimization details from the substantial part of the algorithm, we need an analogue of Cascading Style Sheets. I would propose the following format:
«scope selector» {
  «type» = «implicit value of that type»
  ..
}

Here's an example:
com.miriamlaurel.test432 {
  SortMethod = QuickSort
  Ordering[String] = AlphabeticOrders.Swedish
  FractVariant = BigDecimal

  TestObj#cacheCriticalMethod {
    SortMethod = ShellSort
  }
}

I'd also borrow the !important keyword from CSS, with an optional "salience" argument for manual control of implicit priorities.
IntVariant = BigInteger !important(100)
Apart from the support for numeric literals, everything described above can be implemented with means already present in Scala 2.10 (mainly macros). What do you think?

(https://groups.google.com/d/topic/scala-debate/DWbSe_Erkrc/discussion)
akuklev: (Default)
This is an essay about a programming language that does not yet exist.
I'm going to show you what you can do with the extension of Scala 2.10 presented here and why this extension has a certain universality. Please read the linked post; I'm going to use the new syntax proposed there throughout this post.

The type system Scala 3 should be based on

Scala 3 certainly has to be based upon the type system called “Dependent Object Types”. It's a rather complicated type system with both nominal and structural types, and subtyping. Its subjects are essentially dependent tuples which can contain abstract type members besides normal members. There are also product types (functions), but their construction and elimination rules are not yet settled. Together, sum and product types provide a complete propositions-as-types encoding for first-order logic, which relates DOT to constructive type theories.

Due to the presence of nominal types DOT can be an open-world theory, and it really is one: it does not assume that all types it knows (by name) were necessarily defined within DOT. So there might be foreign types that are not defined (or definable) within the system, but one can still work with objects of these types when they come with modules of operations working on them. Even though these operations are not and cannot be defined within DOT, they can still be used in it: one can compose them with other operations of compatible signatures. A concrete example would be the type of Reals, which are in general not representable in a finite amount of computer memory and thus not definable within any constructive type theory; but assuming such a type "exists in the outer world" we can write down programs working with Reals and reason about these programs even though we're not able to execute them. What's the use of that? Well, after having defined a program for Reals we can translate it into a program manipulating Doubles (floating-point numbers) and analyse whether the latter is a correct approximation of the former. The nominality and open-world setting is a big advantage over Per Martin-Löf style type systems.

As already mentioned, DOT does not yet specify precise construction and elimination rules for product types. There have been efforts to define the rules in such a way that DOT gains the strength of Fω with subtyping, but they led to contradictions. It's quite likely that this is possible and there is ongoing work on this issue, but in this essay I will argue that we should stick to the small DOT core without extending it even to the full strength of System F with unrestricted recursion!
My vision is a weak core plus pluggable completions producing dialects that reflect different topoi (mathematical setups) to work in. There could be a strong Fω-like Scala variety and a weak variety with totality checking. The core is flexible enough to be adapted to arbitrary topoi, from classical ZFC set theory to the Calculus of Constructions to Homotopy Type Theory. It is of course also fit for the legacy Scala topos, constituted by the computations which can be carried out on an idealized JVM.

What are theories and models?
(Exposition of the syntax we need to describe theories comfortably)

A classical quote from Nicolas Bourbaki reads: “Mathematics is a science that studies abstract structures”. In modern language one would rather say that mathematics studies theories and their models.

A theory consists of a signature and a number of properties it must conform to. A signature consists of abstract type members and members of specified types. Here's a signature of a semigroup (not quite complete):
trait Semigroup {
  type G
  def compose(a: G, b: G): G
}

If we consider a theory to be a structure endowing a type, it's better to use a typeclass notation:
trait Semigroup[G] {
  def compose(a: G, b: G): G
}
A model for a given signature is a named tuple assigning concrete types, concrete implementations of operations, and concrete proofs demanded by the signature. Concrete means they are explicitly constructed in the topos (or a weaker realm) inside of which we model our structure. The trait (signature) is the type of the corresponding models.

Whenever we want to have a type endowed with a certain structure (say, Semigroup), we write type G: Semigroup. This notation suggests that typeclasses act like types of types, but actually it's just a shorthand for {type G; def model: Semigroup[G]}, a pair of a type and a model of the theory for this particular type.
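
In current Scala, the value-level counterpart of this shorthand already exists: a context bound desugars into an extra implicit model parameter. A minimal self-contained sketch:
trait Semigroup[G] {
  def compose(a: G, b: G): G
}
// "G: Semigroup" desugars into an implicit Semigroup[G] argument:
def combineAll[G: Semigroup](xs: List[G], fallback: G): G = {
  val model = implicitly[Semigroup[G]]
  xs.reduceOption(model.compose).getOrElse(fallback)
}
// A concrete model: integers under addition.
implicit val intAdditive: Semigroup[Int] = new Semigroup[Int] {
  def compose(a: Int, b: Int): Int = a + b
}
// combineAll(List(1, 2, 3), 0) == 6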

Predicates can be modeled either as types or as operations returning Boolean; the latter is easier for programmers to understand and is also the topos-theoretic way. Here's a signature for the equality theory:
trait Eq[T] {
  def _==(a: T, b: T): Boolean
}
We can also incorporate properties. Say, == must be built so that (x == x) is true for any x. So we might demand a proof of that fact in the signature. For boolean expressions there's a macro type Evidence(expr), which is the type of proofs that the expression is true. (I'll use the Unicode shortcut ✓ for Evidence.) Here's how we encode it:
trait Eq[T] {
  def _==(a: T, b: T): Boolean
  def reflexivity(x: T): ✓(x == x)
  def symmetry(x: T, y: T)(a: ✓(x == y)): ✓(y == x)
  def transitivity(x: T, y: T, z:T)(a: ✓(x == y), b: ✓(y == z)): ✓(x == z)
}
We want to incorporate the associativity property into the signature of the semigroup, so the elements of the semigroup must have equality:
trait SemiGroup[G] extends Eq[G] {
  def compose(a: G, b: G): G
  // Alias for compactness:
  def _∘(a: G, b: G) = compose(a, b)
  def associativity(a: G, b: G, c: G):
    ✓((a ∘ (b ∘ c)) == ((a ∘ b) ∘ c))
}
A few more examples to give you a feel:
trait Monoid[M] extends SemiGroup[M] {
  def unit: M
  def leftCancelativeness(x: M): ✓(x == unit ∘ x)
  def rightCancelativeness(x: M): ✓(x == x ∘ unit)
}
trait VectorSpace[V] extends AbelianGroup[V] {
  type F: Field
  def action(scalar: F, vector: V): V
  ..
}

How to write proofs
Say you want to prove that a monoid has only one unit, i.e. every element of the monoid having the left or right cancellation property is equal to the unit. In other words, we want to define a procedure which, being “given an element unit2 with the left cancellation property”, produces “evidence that unit2 is the same as unit”:
def lemma1[M: Monoid](unit2: M)(leftCanc2: (x: M) => ✓(x == unit2 ∘ x)) = {
  // apply the left cancellation property of unit2 to unit and vice versa
  val p1: ✓(M.unit == unit2 ∘ M.unit) = leftCanc2(M.unit)
  val p2: ✓(unit2 == unit2 ∘ M.unit) = M.rightCancelativeness(unit2)
  // now we just have to simplify “unit2 == something == unit” to “unit2 == unit”
  val p3: ✓(unit2 ∘ M.unit == M.unit) =
      M.symmetry(M.unit, unit2 ∘ M.unit)(p1)
  val p4: ✓(unit2 == M.unit) =
      M.transitivity(unit2, unit2 ∘ M.unit, M.unit)(p2, p3)
  p4
}
The dependent function type used here is a shortcut:
(x: M) => ✓(x == unit2 ∘ x) ≡ {
  def apply(x: M): ✓(x == unit2 ∘ x)
}

The notation Type[_] is used not only for typeclasses; these are generally ‘types parametrized by other types’. Due to the availability of subtyping constraints on abstract type members, in DOT we can demand abstract type members (or type parameters) of traits to be not just types, but types parametrized by other types, so-called higher-kinded types.

A beautiful example is provided by semicategories and categories, which are "typed" versions of semigroups and monoids. You can always think of a semigroup as consisting of functions with their usual composition, which is naturally associative. These must be functions of type T => T for some fixed type T (say, real numbers) so that their composition is always defined. A semicategory is the generalization to the case where we have many types, and only functions with matching source and target types can be composed. A monoid is a semigroup with a unit; a category is a semicategory with units. The signatures of semicategory and category repeat those of semigroup and monoid in all respects, except that in the former we had one type T of semigroup/monoid elements, whereas in the latter we'll have families Arr[_,_] of element types, parametrized by source and target:
trait SemiCategory[Arr[_,_] : Eq] {
  def compose[A, B, C](a: Arr[A, B], b: Arr[B, C]): Arr[A, C]
  def _∘[A, B, C](a: Arr[A, B], b: Arr[B, C]) = compose(a, b)
  def associativity[A, B, C, D](a: Arr[A, B], b: Arr[B, C], c: Arr[C, D]):
    ✓((a ∘ (b ∘ c)) == ((a ∘ b) ∘ c))
}
trait Category[Arr[_,_] : Eq] extends SemiCategory[Arr] {
  def id[O]: Arr[O, O]
  def leftCancelativeness[A, B](x: Arr[A, B]): ✓(x == id[A] ∘ x)
  def rightCancelativeness[A, B](x: Arr[A, B]): ✓(x == x ∘ id[B])
}
(In Scala ≤2.10, the notation T[_] presupposes that the polymorphic type T should accept any types as arguments, and in general, when the bounds of a type parameter in a polymorphic definition are not specified, absence of bounds is presupposed. We suggest that in Scala 3, when no bounds for type parameters are specified, the broadest bounds allowed by the rest of the signature should be assumed. It spares vast amounts of unnecessary writing. In particular, we want the definitions above to be applicable to types of functions on restricted or extended type universes, e.g. Function[A <: Collection, B <: Collection] (functions between collection types) or Function[A: Ord, B: Ord] (functions between types endowed with an order).)

The last thing we want to explore is how to write signatures of theories which make use of ‘powersets’ and ‘subsets’ (power objects and subobjects in the general setting, where we don't restrict ourselves to set-theoretic topoi). In general topoi these are constructed using the _ => _ type (the type of maps from one type to another) and the Boolean type (which is called the subobject classifier in the general setting). First note that ‘subsets’ S of A are classified by maps isMember: A => Boolean.
So, the type of subobjects of T can be defined as
case class Restriction[T](isMember: T => Boolean) {
  type Element = T with {self =>
    membership: ✓(isMember(self))
  }
}
(We can explicitly define the type of the elements of the restriction as a dependent pair.)

Once you have a restriction r: Restriction[S], you can define functions from elements of this restriction:
val even = Restriction[Nat](isMember = {_ % 2 == 0}) 
def bisect(x: even.Element): Nat = ...

We can also easily define structures like topology:
trait TopSpace[S]  {
  def topology: Restriction[Restriction[S]]
  ...
}

In DOT there is a subtyping relation which allows expressions of one type to be used in contexts where another type is expected, in cases where this is entirely safe and transparent. It addresses the following two issues:
– Functions A => B should be applicable to terms of types which are restrictions of A;
– Stronger structures such as monoids should be accepted in contexts where a weaker structure (such as a semigroup) is required.

DOT uses a cumbersome mixture of nominal and structural subtyping which I don't want to describe here completely, but I do want to underline that the mentioned issues are solved by DOT perfectly: restriction types (as defined above) are subtypes of the types they restrict, and stronger signatures are subtypes of weaker ones. The subtyping relation enables specifying lower and upper bounds for abstract type members of traits, which is precisely the mechanism that makes nominal induction, higher-kinded types, and lots of other valuable things possible.

The subtyping relation for types internal to DOT (i.e. traits) is even complete: traits form a lattice, and their least upper bounds and greatest lower bounds are always well defined, which can be used to demand that a type conform to two signatures in a coherent way:
radixSort[T: Ord ∨ Weighted](xs: Collection[T])

The complicated issue is subtyping for abstracted value types. We want all containers (List, Tree, etc.) to be subtypes of Collection, and Nat (natural numbers) to be a subtype of Int. We want A <: B iff the abstracted type A (say, Nat or List) has a unique natural embedding into B (say, Int or Collection). So far I have no concrete idea how to implement this on the type-system level. It's a rather deep issue: all minimal models of Peano Arithmetic have unique natural embeddings into each other, which means that while there is a multitude of models, there is exactly one abstracted type of natural numbers Nat, which is also isomorphic to the universal Semiring. A type system that could take account of such subtleties must be at least as strong as HoTT.

* * *

This was an exposition of the trait-definition sublanguage of Scala. When you're not using type providers and path-dependent types like Evidence or .Element, the language has no term-level constructions; we only define member names and their types in a straightforward way, interpretable in any category with finite products (or more generally, any circuitry) with all the external types we use, like, say, Boolean, provided. The trait itself (as a type) will lie inside the category/circuitry iff the trait contains only types and constants or the category/circuitry is a (cartesian) closed one. Otherwise it lies in the enclosing topos.

When dependent types are used, a term-level language suddenly arises (see the expressions like (x ∘ unit) above). Terms can contain creation of modules and member extraction (harmless things interpretable in all circuitries), but also application and (dependent) abstraction, which are interpretable only in (locally) closed categories/circuitries.

By enhancing or restricting the richness of terms allowed in trait signatures (i.e. by working in more or less powerful circuitries), one can precisely control the tradeoff between expressiveness and manageability.

Typeclass-based polymorphism and open world setting

As we already mentioned, DOT doesn't assume that all types are defined inside of DOT; some might come externally (open world). Now consider a polymorphic function
def foo[T](x: T, y: T)
What can be done to the arguments in the body of the function? Absolutely nothing! There is nothing you can do to values of an unknown type, not even check their equality. Even though all types that can be defined within Scala automatically come with equality, we do not assume that all types have equality. The open-world setting for type systems reads “Inconstructibility of a type does not imply its nonexistence”, which is closely related to the open-world assumption of ontology systems: “Inability to derive a fact does not imply the opposite”. In practice, the nonconstructible types represent codata, such as possibly infinite sequences or (true) real numbers. (Of course, you can generate some infinite sequences deterministically and construct lots of real numbers, but the constructible sequences/reals form an infinitely small fraction of all possible ones.) Only the interfaces (typeclasses) for codata types are constructible, Seq[Int] being an example.

To be able to do something, you must restrict: you must assume that T is endowed with some structure. For instance, like this:
def foo[T: Eq](x: T, y: T)
This is a shorthand for
def foo[T](x: T, y: T, T: Eq[T] = implicit)
So the function takes an additional hidden argument: the model for the theory Eq[T], i.e. a tuple where the operation eq(x: T, y: T) that checks the equality is defined. (It's no problem that I have used the same name for the type and for the associated model; they can always be differentiated from the context.) Now we can write something into the body of foo, for instance
def foo[T: Eq](x: T, y: T) = (x == y)
One of the main strengths of Scala is the presence of encapsulation, i.e. we can abstract out the implementation details of a certain type and forbid any functions to interact with instances of the type in any way other than through the defined interface (signature).
implicit val m: Monoid = new Monoid {
  type M = (Nat, Int)
  def compose(a: M, b: M): M = ???
  // Implementation details, which are aware that M = (Nat, Int)
}
type M = m.M
// Implementation details of M are now hidden, you
// cannot find out that M = (Nat, Int) in the outer scope.
// The only thing you know about it is that M : Monoid,
// since there is a defined implicit model Monoid[M].

There are plenty of real-life examples of typeclass-bound polymorphism:
def sort[T: Ord](xs: Collection[T]): List[T]

... and very few with unbound polymorphism:
def first[T](xs: List[T]): T
def the[T](x: Collection[T], uniqueness: ✓(size(x) == 1) = implicit): T
In order to ensure you're working with a type which is defined within Scala, use the typeclass TypeTag:
def foo[T: TypeTag](x: T)
Within the body of this function you can be sure that T is not some foreign stuff like the type of Real numbers, but a type defined by hand in some Scala source file; you can perform reflection, i.e. analyze the type signature, and so on. The universe of TypeTag'ged types is the native topos of Scala.
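
For flavour, a fragment of this is already real in Scala 2.10's reflection API (a minimal sketch):
import scala.reflect.runtime.universe._
// With a TypeTag in scope we can inspect the reified type at runtime:
def describe[T: TypeTag](x: T): String = s"a value of type ${typeOf[T]}"
// describe(List(1, 2, 3)) == "a value of type List[Int]"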

You can work in other topoi as well. For example, we can create a model of the Agda II type system within Scala.
Here's the code in Agda:
data Nat: Set where
  zero: Nat
  succ: Nat → Nat
data List (T: Set): Set where
  nil: List T
  cons: T → List T → List T

Here's the Scala code (Agda'esque DSL):
val nat = agda.datatype(constructors = Map(
  zero -> nat,
  succ -> (nat → nat)
))
def list(t: agda.Set) = agda.datatype(constructors = Map(
  nil -> list(t),
  cons -> (t → list(t) → list(t))
))

Now we can supply this model with a Scala type provider, i.e. a macro which converts t: agda.Set into native Scala types:
type Nat = agda.Type(nat)
type List[T: agda.Set] = agda.Type(list(T))

When creating the native type T, the type provider also puts the implicit value agda.Set[T] into scope. It is the Agda'esque reification of this type, much like TypeTag[T] is the native reification of T for native Scala types. Consider the following example, where a Scala signature transforms into an Agda'esque signature upon erasure:
last[T: agda.Set](xs: agda.List[T]): T
// Desugar context bounds:
last[T](t: agda.Set, xs: agda.List[T]): T
// Desugar type aliases
last[T](t: agda.Set, xs: agda.Type(agda.list(t))): T
// Pre-erasure type normalization
// (Special macro is being called before erasure)
last[T](t: agda.Set)(xs: agda.Type(agda.list(t))): agda.Type(t)
// Perform erasure
{val t: Set} => Type(list(t)) => Type(t)
// Compare with Agda:
{T: Set} -> List T -> T
(The only difference is that in Agda there is no distinction between the term level and the type level, whereas in Scala you have to lift between levels; the lift from the term-level type representation to the type level is agda.Type(type).)

We can also define a model for ZFC set theory, but as opposed to the Agda topos, which can be equipped with operational semantics and used for computations, set theory can essentially be used only for reasoning and making proofs, because most of its operations are not constructive (either non-computable or non-deterministic, such as taking an arbitrary element from a set of size > 1). The type provider for ZFC sets could produce only mock types, which wouldn't be a problem since the proofs are never executed, only (type)checked.
akuklev: (Default)
Zhenya [livejournal.com profile] xeno_by said what he likes about Scala; well then, let me give it a try too.

It has always seemed to me that a programming language (and a proof assistant, which is the same thing) should first of all be able to reflect mathematical concepts honestly, without compromises. So that in the language one could work with equal ease within standard set theory (ZFC) or within dependent type theory as implemented in Agda 2.

For this, polymorphism must be syntactic (erasure polymorphism), i.e. types (for example, “natural number” or “real number”) are at first treated purely as labels attached to terms, and only at the next stage does the binding of content happen, i.e. term-level objects tied to the labels appear: the models associated with the types. That Scala possesses exactly this kind of polymorphism is the basic reason for its flexibility.

Let's look at a concrete example, in an idealized variant of Scala, of how this could work. So, take the function
foo[T](x: T, y: T)

This is a function polymorphic in T, taking two arguments of type T. What can we do with them? Absolutely nothing! There are no operations defined for arbitrary objects of arbitrary types.

How do we make it possible to do something with them?
akuklev: (Default)
The key feature of dependent type systems is that we can write type-valued functions. Recently I encountered a case where I could use one in Scala to improve code readability. I want to be able to write Reads(res1, res2) to get ReadsAbstract {val reads = Set(res1, res2)}. Well, there are no type-valued methods in Scala, but I'm still able to get really close:

def ReadsGenerator(res: Resource*) = {
  new {type X = ReadsAbstract {val reads = Set(res:_*)}}
}


Now we can write ReadsGenerator(res1, res2).X for the purpose I originally wanted to use Reads(res1, res2) for.

It's worth defining some syntactic sugar for this! First, down with that nasty X. Second, let's add some sugar for generators:

type Reads(res: Resource*) = ReadsAbstract {val reads = Set(res:_*)}

Features:

– Such syntax can easily be extended to support macro types, in exact parallel to the current syntax for macro methods, e.g. type DbTable(jdbcPath: String, creds: DbCredentials) = macro impl

– With this syntax we also obtain nice syntax for dependent tuples and dependent function types.

I would like to thank Nada Amin (ندى أمين) from EPFL for a very insightful discussion on DOT, the future of the Scala type system.
akuklev: (Default)
Of course the presentation isn't quite ready yet, but I'll take the risk of posting it already, to gather first comments. Procedure Typing for Scala:
akuklev: (Default)

Preliminaries

Let's call a polymorphic type F[T] enveloping if it's equipped with eval: F[T] => T and reify: T => F[T] (reify often being a macro) satisfying eval(reify(expression)) = expression and reify(eval(enveloped)) = enveloped.

A typical and motivating example where reify is a macro is Expr[T]. Examples where no macros are involved are Future[T] (with await as eval) and Option[T] (with getOrElse {throw new NoValue()} as eval).
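
The enveloping structure itself is expressible as an ordinary typeclass in today's Scala; only the implicit eval-on-demand behaviour needs new language support. A sketch (Enveloping is a hypothetical name, and NoSuchElementException stands in for NoValue):
trait Enveloping[F[_]] {
  def eval[T](ft: F[T]): T
  def reify[T](t: => T): F[T]
}
implicit object OptionEnveloping extends Enveloping[Option] {
  def eval[T](ot: Option[T]): T = ot.getOrElse(throw new NoSuchElementException)
  def reify[T](t: => T): Option[T] = Some(t)
}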

Autounboxing is the following behaviour: whenever a value of an enveloping type F[T] is found where a value of type T is accepted, eval is called implicitly.* In homoiconic languages (including Scala), one can consider all written expressions to be initially of type Expr[T] and autounboxed into T by the omnipresent implicit autounboxing rule Expr[T] => T.

Proposal

We propose to introduce an instruction implicit[F] which enables autounboxing for F in its scope and quotation brackets F<[expr]> which temporarily disable autounboxing.

Use cases

Code that makes intensive use of futures and promises can be made much more readable by autounboxing. As an example, one can look at Dataflows in Akka 2.0. This is how the code looks now:
flow {
  z << (x() + y())
  if (v() > u) println("z = " + z())
}

Using autounboxing, this can be recast without any unintuitive empty parentheses:
flow {
  z << x + y
  if (v > u) println("z = " + z)
}


The main purpose of this syntax, however, is to be understood in the course of the Scala purification programme, in which all mutable (volatile) values are distinguished at the type level.

____
* In the implicit[F] scope, every occurrence of a symbol having or returning type F[T] that is defined outside of this scope should be processed by eval, unless the result won't typecheck. Symbols defined inside the scope are left alone.
Usual implicit conversions are lazy (i.e. applied only if the expression won't typecheck otherwise); implicit autounboxing is applied eagerly (evaluation is performed unless the resulting expression wouldn't typecheck).
akuklev: (Default)
As Scala macros will become a major feature in the upcoming 2.10 release, we would like to introduce a minor addition improving macro usability and recovering qualified substitutionality*, which is otherwise lost in the presence of macros.

We propose a syntax for digressions: pieces of code marked for evaluation in the outer scope. Their main purpose is the exclusion of a piece of code from macro processing, but they also prove practical in some cases unrelated to macros.

// Proposed syntax:
f {... <} expr {> ...}

// Proposed translation:
lazy val v = expr
f {... v ...}

The proposed syntax would not compromise backward compatibility, since the digraphs “<}” and “{>” proposed as opening and closing digression brackets are invalid character combinations in present-day Scala.

Use cases

[Dear comrades, let's come up with some good examples here!]

_____
* Qualified substitutionality is the property of pure code that all named vals can be eliminated by substituting their occurrences with their (properly amended) definitions.
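
A trivial illustration of the property:
val x = 2 + 3
val y = x * x            // 25
// is, for pure code, equivalent to the substituted form
val y2 = (2 + 3) * (2 + 3)  // 25
// A macro receiving the first block textually could tell the two apart,
// which is exactly what digressions let you guard against.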
akuklev: (Default)
The other day I was asked why programming languages (and Scala in particular) need macros at all. Well, it's obvious even to a hedgehog: for declarative programming. Declarative programming is when, instead of describing to the computer the means of reaching a goal, you describe the goals, and it figures out the means on its own.

Let me quickly write an example, and then everything is self-explanatory:
val (a: Double, b: Double) = solve {
  a + b == 3
  a - b == 1
}
// Now a = 2, b = 1.


Another example, from http://infoscience.epfl.ch/record/161283/files/KuncakETAL10Comfusy.pdf, translated into our syntax:
val (hours: Int, minutes: Int, seconds: Int) = solve {
  0 <= seconds; seconds < 60
  0 <= minutes; minutes < 60
  hours * 3600 + minutes * 60 + seconds == totalSeconds
}
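
For intuition, here is roughly the kind of code a solve macro could generate at compile time for this example (a hand-written sketch in the spirit of the Comfusy paper, not its actual output; assumes totalSeconds >= 0):
def decompose(totalSeconds: Int): (Int, Int, Int) = {
  val hours   = totalSeconds / 3600
  val minutes = (totalSeconds % 3600) / 60
  val seconds = totalSeconds % 60
  (hours, minutes, seconds)  // satisfies all three constraints
}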

These solve examples were shown by [livejournal.com profile] xeno_by (the top specialist on Scala macros) in the comments to this post.
