/r/ProgrammingLanguages
This subreddit is dedicated to the theory, design and implementation of programming languages.
Be nice to each other. Flame wars and rants are not welcome. Please also put some effort into your post; this isn't Quora.
This subreddit is not the right place to ask questions such as "What language should I use for X", "what language should I learn", "what's your favourite language" and similar questions. Such questions should be posted in /r/AskProgramming or /r/LearnProgramming. It's also not the place for questions one can trivially answer by spending a few minutes using a search engine, such as "What is a monad?".
Have any of you ever programmed with Crystal?
The language has GC and compiles AOT with LLVM. The only thing that I find a little off about Crystal is the Ruby-like syntax and OOP (but the language I use now, TypeScript, is also OOP through and through, so it's not a disadvantage). Therefore I'm still considering using Crystal for my compiler because it seems a pretty fast language and I still find it more appealing than Rust.
But maybe Node/Deno is enough in terms of performance. My compiler just needs to be error-free and fast enough to implement the language in itself; hence it's more of a throwaway compiler. lol
So is it worth switching to a language that you have to learn first just for twice the performance (possibly)?
So I've yet to implement visibility modifiers for my classes/functions/properties etc.
The obvious choice would be to use the common public, private and protected terms but I decided to actually think about it for a second. Like, about the conceptual meaning of the terms.
Assuming of course that we want three levels:
1. accessible to everyone.
2. accessible to the class hierarchy only.
3. accessible only to the owner (be that a property in a class, or a class in a "package", etc.).
"Public": makes a lot of sense, not much confusion here.
"Private": also pretty clear.
"Protected": Protected? from who? from what? "shared" would make more sense.
One may want an additional level between 2 and 3, depending on context: "internal", which would be effectively public to everything in the same "package" or "module".
Maybe I'll go with public, shared and private 🤔
It is cheating a bit because it uses the Lambdapi logical framework, and only the new computations about context extension (categories with families) are shown; the usual computations about the lambda calculus are omitted. Comments?
constant symbol Con : TYPE;
constant symbol Ty : Con → TYPE;
constant symbol ◇ : Con;
injective symbol ▹ : Π (Γ : Con), Ty Γ → Con;
notation ▹ infix right 90;
constant symbol Sub : Con → Con → TYPE;
symbol ∘ : Π [Δ Γ Θ], Sub Δ Γ → Sub Θ Δ → Sub Θ Γ;
notation ∘ infix right 80;
rule /* assoc */ $γ ∘ ($δ ∘ $θ) ↪ ($γ ∘ $δ) ∘ $θ;
constant symbol id : Π [Γ], Sub Γ Γ;
rule /* idr */ $γ ∘ id ↪ $γ
with /* idl */ id ∘ $γ ↪ $γ;
symbol 'ᵀ_ : Π [Γ Δ], Ty Γ → Sub Δ Γ → Ty Δ;
notation 'ᵀ_ infix left 70;
rule /* 'ᵀ_-∘ */ $A 'ᵀ_ $γ 'ᵀ_ $δ ↪ $A 'ᵀ_( $γ ∘ $δ )
with /* 'ᵀ_-id */ $A 'ᵀ_ id ↪ $A;
constant symbol Tm : Π (Γ : Con), Ty Γ → TYPE;
symbol 'ᵗ_ : Π [Γ A Δ], Tm Γ A → Π (γ : Sub Δ Γ), Tm Δ (A 'ᵀ_ γ);
notation 'ᵗ_ infix left 70;
rule /* 'ᵗ_-∘ */ $a 'ᵗ_ $γ 'ᵗ_ $δ ↪ $a 'ᵗ_( $γ ∘ $δ )
with /* 'ᵗ_-id */ $a 'ᵗ_ id ↪ $a;
injective symbol ε : Π [Δ], Sub Δ ◇;
rule /* ε-∘ */ ε ∘ $γ ↪ ε
with /* ◇-η */ @ε ◇ ↪ id;
injective symbol pₓ : Π [Γ A], Sub (Γ ▹ A) Γ;
injective symbol qₓ : Π [Γ A], Tm (Γ ▹ A) (A 'ᵀ_ pₓ);
injective symbol &ₓ : Π [Γ Δ A], Π (γ : Sub Δ Γ), Tm Δ (A 'ᵀ_ γ) → Sub Δ (Γ ▹ A);
notation &ₓ infix left 70;
rule /* &ₓ-∘ */ ($γ &ₓ $a) ∘ $δ ↪ ($γ ∘ $δ &ₓ ($a 'ᵗ_ $δ));
rule /* ▹-β₁ */ pₓ ∘ ($γ &ₓ $a) ↪ $γ;
rule /* ▹-β₂ */ qₓ 'ᵗ_ ($γ &ₓ $a) ↪ $a;
rule /* ▹-η */ (@&ₓ _ _ $A (@pₓ _ $A) qₓ) ↪ id;
I want to share features that I've added to my language (LIPS Scheme) REPL written in Node.js. If you have a simple REPL, maybe it will inspire you to create a better one.
I don't see a lot of CLI REPLs that have features like this. I was recently testing Deno (a JavaScript/TypeScript runtime), which has syntax highlighting. The only other REPL I know of that has parentheses matching is CLISP, but it does it differently (the same way I did on the Web REPL): the cursor jumps to the matching parenthesis for a split second. But I think it would be much more complex to implement something like this.
I'm not sure if you can add images here, so here is a link to a GIF that shows those features:
https://github.com/jcubic/lips/raw/master/assets/screencast.gif?raw=true
Do you use features like this in your REPL?
I plan to write an article about how to create a REPL like this in Node.js.
I've been working on a small project trying to implement type inference for a toy language. I am using the Rust library polytype to do this. For the most part, things have been straightforward. I have functions working with let polymorphism, if/else, lists, etc. However, I've hit a wall and am stuck trying to figure out how I can handle records.
A record can be created as follows:
let r = {x: 1, y: {z: 1, w: true}};
Records are just structural types that can be nested. The issue arises here (assume 'r' is the record I defined above):
let f = fn(a) {
a.y.w
};
f(r) || true;
The problem is with how I've been defining records in polytype and how field access works. I've been defining records in polytype as follows:
// the record 'r' above would be represented like this
Type::Constructed("record", vec![tp!(int), Type::Constructed("record", vec![tp!(int), tp!(bool)])])
For the field access I've been taking the field and "projecting it" into a record.
Expr::Member { left, receiver } => {
    let record_type = type_check(ctx, env, left)?;
    // --- receiver handling is omitted --- //
    // Create a type variable for the field
    let field_type = ctx.new_variable();
    // Create an expected record type with this field
    let expected_record_type = Type::Constructed(
        "record",
        vec![field_type.clone()],
    );
    // Unify the inferred type with the expected type
    ctx.unify(&record_type, &expected_record_type)
        .map_err(|_| {
            format!(
                "Type error: Record type {} does not match expected type {}.",
                record_type, expected_record_type
            )
        })?;
    Ok(field_type)
}
Here lies the problem: the function 'f' doesn't know how many fields record 'a' has, so when it encounters 'a.y.w', Expr::Member only projects a single field into the expected record. However, when it's used in 'f(r)', 'r' has 2 fields as part of 'y', not one. This results in a failure, since polytype can't unify "record(int, record(int, bool))" with "record(record(t1))", where t1 is a type variable. I have very limited knowledge of type theory, and I am trying to avoid type annotations for functions; is it possible to address this without function argument annotations?
Any guidance is appreciated!
For memory-safe and fast programming languages, I think one of the most important, and hardest, questions is memory management. For my language (compiled to C), I'm still struggling a bit, and I'm pretty sure I'm not the only one. Right now, my language uses reference counting. This works, but is a bit slow compared to e.g. Rust or C. My current plan is to offer three options:
Reference counting is simple to use, and allows calling a custom "close" method, if needed. Speed is not all that great, and the counter needs some memory. Dealing with cycles: I plan to support weak references later. Right now, the user needs to prevent cycles.
Ownership: each object has one owner. Borrowing is allowed (always mutable for now), but only on the stack (variables, parameters, return values; fields of value types). Only the owner can destroy the object; no borrowing is allowed when destroying. Unlike Rust, I don't want to implement a borrow checker at compile time, but at runtime: if the object is borrowed, the program panics, similar to array-index out of bounds or division by zero. Checking for this can be done in batches. Due to the runtime check, this is a bit slower than in Rust, but I hope not by much (to be tested). Internally, this uses malloc / free for each object.
Arena allocation: objects can be created in an arena, using a bump allocator. The arena knows how many objects are alive, and allocation fails if there is no more space. Each object has an owner, and borrowing on the stack is possible (as above). Each arena has a counter of live objects, and if that reaches 0, the stack is checked for borrows (this might panic, same as with Ownership), and so the arena can be freed. Pointers are direct pointers, but internally actually two pointers: one to the arena, and one to the object. An alternative would be to use an "arena id" plus an offset within the arena. Or a tagged pointer, but that is not portable. It looks like this is the fastest memory management strategy (my hope is: faster than Rust; but I need to test first), but also the hardest to use efficiently. I'm not quite sure if there are other languages that use this strategy. The main reason why I would like to have this is to offer an option that is faster than Rust. It sounds like this would be useful in e.g. compilers.
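To make the runtime-checked borrowing idea above concrete, here is a tiny semantic model in Python. It is not the planned C implementation (and it ignores the batching and stack-scanning details); it only models the rule being enforced: each object carries a borrow counter, and destroying it while the counter is non-zero is a panic.

class Owned:
    def __init__(self, value):
        self.value = value
        self.borrows = 0    # runtime counter instead of a compile-time borrow checker

    def borrow(self):
        self.borrows += 1
        return self

    def release(self):
        self.borrows -= 1

    def destroy(self):
        # Only the owner may destroy, and never while a borrow is outstanding.
        if self.borrows != 0:
            raise RuntimeError("panic: object destroyed while borrowed")
        self.value = None

The compiled code would of course use a plain integer in the object header rather than a Python object, but the failure mode (a panic, like an out-of-bounds index) is the same.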
Syntax: I'm not quite sure yet. I want to keep it simple. Maybe something like this:
Reference counting
t := new(Tree) # construction; ref count starts at 1; type is 'Tree'
t.left = l # increment ref count of l
t.left = null # decrement t.left
t.parent = p? # weak reference
t = null # decrement
fun get() Tree # return a ref-counted Tree
Ownership
t := own(Tree) # construction; the type of t is 'Tree*'
left = t # transfer ownership
left = &t # borrow
doSomething(left) # using the borrow
fun get() Tree& # returns a borrowed reference
fun get() Tree* # returns an owned tree
Arena
arena := newArena(1_000_000) # 1 MB
t := arena.own(Tree) # construction; the type of t is 'Tree**'
arena(t) # you can get the arena of an object
left = &t # borrow
t = null # decrements the live counter in the arena
arena.reuse() # this checks that there are no borrows on the stack
In addition to the above, a user or library might use "index into array", optionally with a generation, like Vale. But I think I will not support this strategy in the language itself for now. I think it could be fast, but Arena is likely faster (assuming the same amount of optimization).
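For illustration, here is a minimal sketch of that "index into array, with a generation" idea (Python, all names made up): handles are (index, generation) pairs, and a slot's generation is bumped when it is freed, so stale handles are detected instead of dangling.

class GenerationalArena:
    def __init__(self):
        self.slots = []   # list of (generation, value) pairs
        self.free = []    # indices of freed slots, ready for reuse

    def insert(self, value):
        if self.free:
            i = self.free.pop()
            gen, _ = self.slots[i]
            self.slots[i] = (gen, value)
            return (i, gen)                  # handle = (index, generation)
        self.slots.append((0, value))
        return (len(self.slots) - 1, 0)

    def get(self, handle):
        i, gen = handle
        slot_gen, value = self.slots[i]
        return value if slot_gen == gen else None   # stale handle detected

    def remove(self, handle):
        i, gen = handle
        slot_gen, _ = self.slots[i]
        if slot_gen == gen:
            self.slots[i] = (gen + 1, None)  # bump generation: old handles go stale
            self.free.append(i)

Lookups stay plain array indexing, at the cost of one extra integer per slot and per handle.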
How do I parse a multiple assignment statement ?
For example, given the statement a, b, c = 1, 2, 3, should I parse it as a left-hand side list and a right-hand side list, or should I desugar it into a series of separate assignment statements, such as a = 1, b = 2, and c = 3, and then handle them separately?
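For what it's worth, here is a minimal sketch of the first option in Python (the parser helpers parse_expression, match and expect are hypothetical): parse both sides into lists and keep a single multi-assignment node. Keeping one node preserves the option of evaluating the whole right-hand side before any assignment happens, which matters for cases like a, b = b, a, where a naive desugaring to a = b; b = a would be wrong.

from dataclasses import dataclass

@dataclass
class MultiAssign:
    targets: list   # left-hand side expressions, e.g. [a, b, c]
    values: list    # right-hand side expressions, e.g. [1, 2, 3]

def parse_assignment(parser):
    # Parse `a, b, c = 1, 2, 3` into a single MultiAssign node.
    targets = [parser.parse_expression()]
    while parser.match(","):
        targets.append(parser.parse_expression())
    parser.expect("=")
    values = [parser.parse_expression()]
    while parser.match(","):
        values.append(parser.parse_expression())
    if len(targets) != len(values):
        raise SyntaxError("mismatched number of targets and values")
    return MultiAssign(targets, values)

Lowering it into separate assignments (through temporaries, if needed) can then be a later desugaring pass, once the evaluation order is decided.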
I have finally finished my first version of an IntelliJ plugin for my language and I have to say that it was hard going. I spent countless hours stepping through IntelliJ code in the debugger trying to work out how things worked. It was a lot harder than I initially thought.
How did others who have been down this path find the experience?
So, it's that time of the year again. I am not super persistent, but I have tried to do at least a few days of r/adventofcode each year for the past 2 years with my language Ryelang. At some point I always decided it was taking too much time, but trying to solve the puzzles each year got me ideas for new core functions, and I usually found some bugs or missing functionality. This year I've done all 3 days so far ... this is my post about the first day, for example: https://www.reddit.com/r/adventofcode/comments/1h3vp6n/comment/lzx6czc/
What about you? Are you testing your language with these challenges ... if not, why not? :)
It provides a new programming experience for designing complex control flows. It brings elements of visual programming embedded in a text interface, coupled with powerful type inference, so you can create very compact and readable code at the same time.
It's Haskell compatible (since it's technically just an eDSL).
Is there an alternative to QEMU which can run user-space apps under Windows? Or should I switch to Linux so that I can use QEMU?
The AEC-to-ARM compiler will have to work rather differently from my AEC-to-WebAssembly and AEC-to-x86 compilers because ARM is entirely a register-based machine. I will either have to implement some register-allocation algorithm or figure out how to keep the stack in the RAM. I don't know much about ARM assembly yet, I will have to study it first.
I started working on an OOP language without keywords called Karo. At this point the whole thing is more a theoretical thing, but I definitely plan to create a standard and a compiler out of it (in fact I already started with one compiling to .NET).
A lot of the keyword-less languages just use a ton of symbols instead, but my goal was to keep the readability as simple as possible.
#import sl::io; // Importing the sl::io type (sl = standard library)
[public]
[static]
aaronJunker.testNamespace::program { // Defining the class `program` in the namespace `aaronJunker.testNamespace`
[public]
[static]
main |args: string[]|: int { // Defining the function `main` with one parameter `args` of type array of `string` that returns `int`
sl::io:out("Hello World"); // Calling the static function (with the `:` operator) of the type `io` in the namespace `sl`
!0; // Returns `0`.
}
}
I agree that the syntax is not very easy to read at first glance, but it is not very complicated. What might not be easy to decipher are the things between square brackets; these are attributes. Instead of keyword modifiers like in other languages (like public and static), you use types/classes, just like in C#.
For example internally public is defined like this:
[public]
[static]
[implements<sl.attributes::attribute>]
sl.attributes::public { }
...return a value
You use the ! statement to return values.
returnNumber3 ||: int {
!3;
}
...use statements like if or else
Unlike common languages, Karo has no constructs like if, else, while, ...; all these things are functions.
But then how is this possible?:
age: int = 19
if (age >= 18) {
sl::io:out("You're an adult");
} -> elseIf (age < 3) {
sl::io:out("You're a toddler");
} -> else() {
sl::io:out("You're still a kid");
}
This is possible because the if function has the construct attribute, which allows the function definition that comes after the function call to be passed as the last argument. Here are the simplified definitions of these functions (what -> above and <- below mean is explained later):
[construct]
[static]
if |condition: bool, then: function<void>|: bool { } // If `condition` is `true` the function `then` is executed. The result of the condition is returned
[construct]
[static]
elseIf |precondition: <-bool, condition: bool, then: function<void>|: bool { // If `precondition` is `false` and `condition` is `true` the function `then` is executed. The result of the condition is returned
if (!precondition && condition) {
then();
}
!condition;
}
[construct]
[static]
else |precondition: <-bool, then: function<void>|: void { // If `precondition` is `false` the function `then` is executed.
if (!precondition) {
then();
}
}
This also works for while and foreach loops.
...access the object (when this is not available)
Same as in Python, the first argument can get passed the object itself; the type declaration is just an exclamation mark.
[public]
name: string;
[public]
setName |self: !, name: string| {
self.name = name;
}
...create a new object
Just use parentheses, like calling a function, to instantiate a new object.
animals::dog {
[public]
[constructor]
|self: !, name: string| {
self.name = name;
}
[private]
name: string;
[public]
getName |self: !|: string {
!self.name;
}
}
barney: animals::dog = animals::dog("barney");
sl::io:out(barney.getName()); // "barney"
Type constraints
Type definitions can be constrained by their properties by putting constraints between single quotes.
// Defines a string that has to be longer then 10 characters
constrainedString: string'length > 10';
// An array of maximum 10 items with integers between 10 and 12
constrainedArray: array<int'value >= 10 && value <= 12'>'length < 10'
Pipes
Normally only functional programming languages have pipes, but Karo has them too, with the pipe operator ->. It transfers the result of the previous statement to the argument of the function decorated with the receiving pipe operator <-.
An example could look like this:
getName ||: string {
!"Karo";
}
greetPerson |name: <-string|: string {
!"Hello " + name;
}
shoutGreet |greeting: <-string|: void {
sl::io:out(greeting + "!");
}
main |self: !| {
self.getName() -> self.greetPerson() -> shoutGreet(); // Prints out "Hello Karo!"
}
I would love to hear your thoughts on this first design. What did I miss? What should I consider? I'm eager to hear your feedback.
I'm currently trying to design a language, and I am a bit blocked on some features that either do not interact well or are too verbose. The general idea is to try to combine mixins and aspects and to enable at least some static typing of the result. I'm somewhat unhappy with happens-to-compile 'type checks' for AoP and am trying to figure out what could be done here, and treating an aspect as a kind of mixin looks like a promising idea. I would like to learn about interesting ideas that were already tried in other languages with mixins in the areas below:
I'm interested in papers or language implementations. If you have a good link, please post it in the comments.
I remember about 8 years ago I was hearing that tech companies didn't seek employees with degrees, because by the time the curriculum was made and taught, there would have been many more advancements in the field. I'm wondering: did this, or does this, pertain to new high-level languages? From what I see in the industry, a CS degree is very necessary to find employment. Was it individuals who don't program that put out the narrative that university CS curricula are outdated? Or was that narrative never factual?
Hi! I'm developing a programming language (Plum) with a custom backend. As part of that, I need to decide on memory layouts. I want my structs to have nice, compact memory layouts.
My problem: I want to store a set of fields (each consisting of a size and alignment) in memory. I want to find an ordering so that the total size is minimal when storing the fields in memory in that order (with adequate padding in between so that all fields are aligned).
Unlike some other low-level languages, the size of my data types is not required to be a multiple of the alignment. For example, a "Maybe Int" (Option<i64> in Rust) has a size of 9 bytes, and an alignment of 8 bytes (enums always contain the payload followed by a byte for the tag).
Side note: This means that I need to be more careful when storing multiple values in memory next to each other – in that case, I need to reserve the size rounded up to the alignment for each value. But as this is a high-level language with garbage collection, I only need to do that in one single place, the implementation of the builtin Buffer type.
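For concreteness, a small sketch of that rounding (Python, helper name is mine): the element stride in the Buffer is the size rounded up to the next multiple of the alignment.

def stride(size, align):
    # round `size` up to the next multiple of `align`
    return (size + align - 1) // align * align

assert stride(9, 8) == 16   # e.g. a 9-byte, 8-aligned "Maybe Int" occupies 16 bytes per Buffer element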
Naturally, I tried looking at how other languages deal with field reordering.
C: It doesn't reorder fields.
struct Foo {
int8_t a;
int64_t b;
int8_t c;
};
// C layout (24 bytes): a.......bbbbbbbbc.......
// what I want (10 bytes): bbbbbbbbac
Rust: Rust requires sizes to be a multiple of the alignment. That makes ordering really easy (just order the fields according to decreasing alignment), but it introduces unnecessary padding if you nest structs:
struct Foo {
a: i64,
b: u8,
}
// Rust layout (16 bytes): aaaaaaaab.......
// what I want (9 bytes): aaaaaaaab
struct Bar {
c: Foo,
d: u8,
}
// Rust layout (24 bytes): ccccccccccccccccd....... (note that "c" is 16 bytes)
// what I want (10 bytes): cccccccccd
Zig: Zig is in its very early days. It future-proofs the implementation by saying you can't depend on the layout, but for now, it just uses the C layout as far as I can tell.
LLVM: There are some references to struct field reordering in presentations and documentation, but I couldn't find the code for that in the huge codebase.
Haskell: As a statically typed language with algorithmically-inclined people working on the compiler, I thought they might use something interesting. But it seems like most data structure layouts are primarily pointer-based and word-sizes are the granularity of concern.
Literature: Many papers that refer to layout optimizations tackle advanced concepts like struct splitting according to hot/cold fields, automatic array-of-struct to struct-of-array conversions, etc. Most mention field reordering only as a side note. I assume this is because they usually work on the assumption that size is a multiple of the alignment, so field reordering is trivial, but I'm not sure if that's the reason.
Do you reorder fields in your language? If so, how do you do that?
Sometimes I feel like the problem is NP-hard: some related tasks, like "what fields do I need to choose to reach some alignment", feel like the knapsack problem. But for a subset of alignments (like 1, 2, 4, and 8), it seems like there should be some algorithm for that.
Brain teaser: Here are some fields that can be laid out without requiring padding:
- a: size 10, alignment 8
- b: size 9, alignment 8
- c: size 12, alignment 2
- d: size 1, alignment 1
- e: size 3, alignment 1
It feels like this is such a fundamental part of languages, surely there must be some people that thought about this problem before. Any help is appreciated.
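Not a full answer, but for small field counts an exhaustive search is cheap and makes a handy oracle for testing whatever heuristic you settle on. A quick Python sketch (function names are mine), applied to the brain teaser above:

from itertools import permutations

def layout_size(fields):
    # Total size when the (size, alignment) fields are laid out in this order.
    offset = 0
    for size, align in fields:
        offset = (offset + align - 1) // align * align   # pad up to the alignment
        offset += size
    return offset

def best_order(fields):
    # Exhaustive search for an order of minimal total size; fine for small n.
    return min(permutations(fields), key=layout_size)

fields = [(10, 8), (9, 8), (12, 2), (1, 1), (3, 1)]   # a, b, c, d, e from the teaser
order = best_order(fields)
print(order, layout_size(order))   # finds a padding-free order of total size 35

Beyond a dozen or so fields the factorial blows up, so you would fall back to a heuristic (e.g. decreasing alignment, ties broken by size) and use the exhaustive version in tests to measure how far the heuristic is from optimal.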
Solution to the brain teaser: >!bbbbbbbbbeeeccccccccccccaaaaaaaaaad!<
With some recent improvements the way Pipefish does Golang interop has gone from a shameful hack with limited features to a little technical gem that does everything.
How it works from the user end is nice and simple. You can write the signature of a function in Pipefish, and the body in Go, joined by the golang keyword as a Pipefish-to-Go converter:
fib(n int) : golang {
a := 0
b := 1
for i := 0; i <= n; i++ {
a, b = b, a + b
}
return a
}
This gives access to the extra speed of Go, and makes it trivial or indeed automatable to turn Pipefish libraries into standard libraries.
To make this nice for everyone we have interop on the type level: we can pass around all the basic types; all the container types (lists, maps, sets, pairs), and lambdas. (The lambdas aren't just to show off, the Go people are into libraries with functions that take functions as arguments. So passing them is important. Returning them was to show off, I can't think why anyone would want to.)
And then the user-defined types in the Pipefish code are known to any Go function that needs to know about them:
newtype
Dragon = struct(name string, color Color, temperature Temperature)
Color = enum RED, GREEN, GOLD, BLACK
Temperature = clone int
def
// Returns the color of the hottest dragon.
dragonFight(x, y Dragon) -> Color : golang {
if x.Temperature >= y.Temperature {
return x.Color
}
return y.Color
}
All this "just works" from the POV of the user.
How it works on the inside
This time I thought I'd give the technical details because the other Gophers would want to see. I think the only thing that could be significantly better than this is if using the plugin package at all is a local optimum and there's an overall better architecture, in which case let me know. (Please, urgently.)
Go has a plugin package. The way it works is in principle very simple. You tell the Go compiler to compile your code into a .so file rather than an ordinary executable. You can then point the plugin package at the .so file and slurp out any public function (by name) into a Value type which you can then cast to a function type:
p, _ := plugin.Open("plugin_name.so")
fooValue, _ := p.Lookup("Foo")
myFooFunction := fooValue.(func(x troz) zort)
myFooFunction now does the same as Foo, and as I understand it, does so without overhead; it just is the original function.
(In practice this is rickety as heck and also Google hasn't bothered to spend any of their vast cash on making this supposedly "standard" library work for the Windows version of Go. The discussion on why not includes the comment that it is "mostly a tech demo that for some unholy reason got released as a stable feature of the language". I can't do anything about any of this except maybe send roadkill through the mail to all concerned. However, when using the plugin package I have learned to turn around three times and spit before invoking the juju and it's working out for me.)
Sooo ... all we have to do is take the embedded Go out of the Pipefish script, compile it, and it should run, and then we slurp the Go function out of the plugin, tell the compiler to wrap the Pipefish signatures around it, and Bob's your uncle, right?
Well, not quite. Because for one thing, all the embedded Go is in the bodies of the functions. The signatures are in Pipefish:
// Returns the color of the hottest dragon.
dragonFight(x, y Dragon) -> Color : golang {
if x.Temperature >= y.Temperature {
return x.Color
}
return y.Color
}
So we need to translate the signature into Go. No problem:
func DragonFight(x Dragon, y Dragon) Color {
if x.Temperature >= y.Temperature {
return x.Color
}
return y.Color
}
And of course we're going to have to generate some type declarations. Also easy:
type Temperature int
type Dragon struct {
Name string
Color Color
Temperature Temperature
}
type Color int
const (
RED Color = iota
GREEN
GOLD
BLACK
)
Now our generated code knows about the types. But our runtime doesn't. So what we do is generate code defining a couple of variables:
var PIPEFISH_FUNCTION_CONVERTER = map[string](func(t uint32, v any) any){
"Dragon": func(t uint32, v any) any {return Dragon{v.([]any)[0].(string), v.([]any)[1].(Color), v.([]any)[2].(Temperature)}},
"Color": func(t uint32, v any) any {return Color(v.(int))},
"Temperature": func(t uint32, v any) any {return Temperature(v.(int))},
}
var PIPEFISH_VALUE_CONVERTER = map[string]any{
"Color": (*Color)(nil),
"Temperature": (*Temperature)(nil),
"Dragon": (*Dragon)(nil),
}
Then the Pipefish compiler slurps these in along with the functions, turns them into (a) a map from Pipefish type numbers to the functions (b) a map from Go types to Pipefish type numbers, and stores this data in the VM. This provides it with all the information it needs to translate types.
The compiler writes the generated Go into a .go source file, compiles it into a .so file, and does the housekeeping.
Can I glue all the languages?
Rust, for example, is a nice language. Can I glue it into Pipefish in the same way?
In principle, yes. All I have to do is make the compiler recognize things that say rust { like it now does things that say golang {, and write a thing to generate Rust code and compile it, and then another thing to generate a Go plugin that knows how to do FFI with the compiled Rust. Simple. Ish. Of course, there are lots of languages, many of which I don't know (Rust, for example) and so working my way through them all would not be a good use of my time.
However. Suppose I make a languages folder in the Pipefish app that people can drop .go files into, for example rust.go. These files would be converted into .so files by the Pipefish compiler (people can't just supply ready-compiled .so files themselves because of version compatibility nonsense). Each such file would contain a standardized set of functions saying how to generate the source code for the target language, how to make object code from the source code, and how to write a .go file that compiles into a .so file that can do FFI with the object code in the target language.
So then anyone who wanted to could write a plugin to add another language you could glue into your Pipefish code.
I don't see why it wouldn't work. Perhaps I'm missing something but it seems like it would.
In Miranda, comparison operators can be chained, e.g.
if 0 <= x < 10
desugars in the parser to
if 0 <= x & x < 10
This extends to any length for any comparison operator producing a Bool:
a == b == c < d
is
a == b & b == c & c < d
I like this, as it more closely represents mathematical notation. Are there other programming languages that have this feature?
https://en.wikipedia.org/wiki/Miranda_(programming_language)
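(Python, for one, has this feature built in: 0 <= x < 10 chains, and the middle operand is evaluated only once.) As a rough sketch of the desugaring at the AST level, with invented node types: pair up adjacent operands and fold the pairwise comparisons into a conjunction. In a pure language like Miranda duplicating the shared operand is harmless; in an impure one you would bind it to a temporary first so it is evaluated only once.

from dataclasses import dataclass

@dataclass
class Compare:      # a single comparison, e.g. x < 10
    op: str
    left: object
    right: object

@dataclass
class And:          # logical conjunction, i.e. the `&` above
    left: object
    right: object

def desugar_chain(operands, ops):
    # Desugar `e0 op0 e1 op1 e2 ...` into And-ed pairwise comparisons.
    # `operands` has one more element than `ops`.
    comparisons = [
        Compare(op, left, right)
        for op, left, right in zip(ops, operands, operands[1:])
    ]
    result = comparisons[0]
    for c in comparisons[1:]:
        result = And(result, c)
    return result

# 0 <= x < 10  ==>  And-free single pair here: Compare('<=', 0, 'x') & Compare('<', 'x', 10)
print(desugar_chain([0, 'x', 10], ['<=', '<']))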
Hi everyone. I've recently contemplated the design of a minimalist, higher level Rust-like programming language with the following properties:
Clearly, mutable value semantics requires some way to pass/return-by-reference. There are two possibilities:
With most types in your program being comparably cheap to copy, making a copy rather than using an immutable reference would often be simpler and easier to use. However, immutable references still come in handy when dealing with move-only types, especially since putting such types inside containers also infects that container to be move-only, requiring all container types to deal with move-onlyness: functions like len or is_empty on a container type need to use a reference, since we don't want the container to be consumed if it contains a move-only type. Being forced to use an exclusive mutable reference here may pose a problem at the usage site (but maybe it would not be a big deal in practice?)
What do you think about having only exclusive mutable references in such a language? What other problems could this cause? Which commonly used programming patterns might be rendered harder or even impossible?
Subtyping is something we are used to in the real world. Can we copy our real world subtyping to our programming languages? Yes (with a bit of fiddling): https://wombatlang.blogspot.com/2024/11/the-case-for-subtypes-real-world-and.html.
How much progress have you made since last time? What new ideas have you stumbled upon, what old ideas have you abandoned? What new projects have you started? What are you working on?
Once again, feel free to share anything you've been working on, old or new, simple or complex, tiny or huge, whether you want to share and discuss it, or simply brag about it - or just about anything you feel like sharing!
The monthly thread is the place for you to engage /r/ProgrammingLanguages on things that you might not have wanted to put up a post for - progress, ideas, maybe even a slick new chair you built in your garage. Share your projects and thoughts on other redditors' ideas, and most importantly, have a great and productive month!