It's easy to think of notation as something like shell expansions - that all you're doing is replacing expressions with other expressions.
But it goes much deeper than that. My professor once explained how many great discoveries come paired with new notation. The new notation signals "here's a new way to think about this problem", and many problems that are unsolved today will give way to powerful notation.
There's something to economy of thought and ergonomics. On a smaller scale, when CoffeeScript popped up it radically altered how I wrote JavaScript, because of the lambda shorthand and all the other syntactic conveniences. It made code easier to think about, read, and rewrite.
Same goes for SML/Haskell and Lisps (at least for me).
> paired with new notation
The DSL/language driven approach first creates a notation fitting the problem space directly, then worries about implementing the notation. It's truly empowering. But this is the lisp way. The APL (or Clojure) way is about making your base types truly useful, 100 functions on 1 data structure instead of 10 on 10. So instead of creating a DSL in APL, you design and layout your data very carefully and then everything just falls into place, a bit backwards from the first impression.
You stole the words from my mouth!
One of the issues DSLs give me is that the process of using them invariably obsoletes their utility. That is, the process of writing an implementation seems to be synonymous with the process of learning what DSL your problem really needs.
If you can manage to fluidly update your DSL design along the way, it might work, but in my experience the premature assumptions of initial designs end up getting baked in to so much code that it's really painful to migrate.
APL, on the other hand, I have found extremely amenable to updates and rewrites. I mean, even just psychologically, it feels way more sensible to rewrite a couple of lines of code versus a couple of hundred, and in practice I find the language very well suited to quickly exploring a problem domain with code sketches.
I was playing with Uiua, a stack- and array-based programming language. It was amazing to solve Advent of Code problems with just a few lines of code. And, as GP said, once you get the right form for the array, the handful of functions in the standard library is sufficient.
That particular quote is from the "Epigrams on Programming" article by Alan J. Perlis, from 1982. Lots of the ideas/"epigrams" from that list are useful, and many languages have implemented lots of them. But some of them aren't so obvious until you've actually put them into practice. The full list can be found here: https://web.archive.org/web/19990117034445/http://www-pu.inf... (the quote in question is item #9)
I think most people haven't experienced the whole "100 functions on 1 data structure instead of 10 on 10" thing themselves, so there are no attempts to bring it to other languages; you can't miss what you're not aware of to begin with.
Then the whole static-typing hype (that is the current cycle) makes it kind of difficult, because static typing tends to push you toward the opposite: one function you can only use with whatever type you specify in the parameters. Of course traits/interfaces/whatever-your-language-calls-it help with this somewhat, even if it's still pretty static.
Some of us think in those terms and daily have to fight those who want 20 different objects, each 5-10 levels deep in inheritance, to achieve the same thing.
I wouldn't say 100 functions over one data structure, but e.g. in Python I prefer a few data structures like dictionaries and arrays, with 10-30 top-level functions that operate over those.
If your requirements are fixed, it's easy to go nuts and design all kinds of object hierarchies - but if your requirements change a lot, I find it much easier to stay close to the original structure of the data that lives in the many files, and operate on those structures.
Good point. Notation matters in how we explore ideas.
Reminds me of Richard Feynman. He started inventing his own math notation as a teenager while learning trigonometry. He didn’t like how sine and cosine were written, so he made up his own symbols to simplify the formulas and reduce clutter. Just to make it all more intuitive for him.
And he never stopped. Later, he invented entirely new ways to think about physics tied to how he expressed himself, like Feynman diagrams (https://en.wikipedia.org/wiki/Feynman_diagram) and slash notation (https://en.wikipedia.org/wiki/Feynman_slash_notation).
> Notation matters in how we explore ideas.
Indeed, historically. But are we not moving into a society where thought is unwelcome? We build tools to hide underlying notation and structure, not because it affords abstraction but because it's "efficient". Is there not a tragedy afoot, by which technology, at its peak, nullifies all its foundations? Those of us who can do mental formalism, mathematics, code, etc. - I doubt we will have any place in a future society that values only superficial convenience and the appearance of correctness, and shuns as "slow old throwbacks" those who reason symbolically, "the hard way" (without AI).
(cue a dozen comments on how "AI actually helps" and amplifies symbolic human thought processes)
Let's think about how an abstraction can be useful, and then redundant.
Logarithms allow us to simplify a hard problem (multiplying large numbers) into a simpler problem (addition), but the abstraction results in an approximation. It's a good enough approximation for lots of situations, but it's a map, not the territory. You could also solve division, which means you could take decent stabs at powers and roots, and voilà - once you made that good enough and a bit faster, an engineering and scientific revolution can take place. Marvelous.
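(A tiny sketch of my own, not part of the original argument: the identity log(a*b) = log(a) + log(b) is the whole trick, and a couple of lines of Python show both the speedup and the approximation.)

  import math

  a, b = 35325, 948572
  log_sum = math.log10(a) + math.log10(b)   # the "addition" a log table gives you
  approx = 10 ** log_sum                    # back out of log space
  exact = a * b                             # 33508305900
  print(approx, exact)                      # approx is close, but not exact: the map, not the territory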
For centuries people produced log tables - some so frustratingly inaccurate that Charles Babbage thought of a machine to automate their calculation - and we had slide rules and we made progress.
And then a descendant of Babbage's machine arrived - the calculator, or computer - and we didn't need the abstraction any more. We could quickly type 35325 x 948572 and, far faster than any log table lookup, be confident that the answer was exactly 33,508,305,900. And a new revolution is born.
This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you. For a while we had a tool that helped (roughly), and then we got a better tool thanks to that tool. And we might be about to get a better tool again where instead of doing the maths, the tool can use more impressive models of physics and engineering to help us build things.
The metaphor I often use is that these tools don't replace people, they just give them better tools. There will always be a place for being able to work from fundamentals, but most people don't need those fundamentals - you don't need to understand the foundations of how calculus was invented in order to use it, the same way you don't need to build a toaster from scratch to have breakfast, or know how to build your car from base materials to get to the mountains at the weekend.
> This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you.
What tool exactly are you referring to? If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article. There is a reason mathematics is no longer done with long-form prose and instead uses its own, more economical notation that is sufficiently precise as to even be evaluated and analyzed by computers.
Natural languages have a lot of ambiguity, and their grammars allow nonsense to be expressed in them ("colorless green ideas sleep furiously"). Moreover two people can read the same word and connect two different senses or ideas to them ("si duo idem faciunt, non est idem").
Practice with expressing thoughts in formal language is essential for actually patterning your thoughts against the structures of logic. You would not say that someone who is completely ignorant of Nihongo understands Japanese culture, and custom, and manner of expression; similarly, you cannot say that someone ignorant of the language of syllogism and modus tollens actually knows how to reason logically.
You can, of course, get a translator - and that is what maybe some people think the LLM can do for you, both with Nihongo, and with programming languages or formal mathematics.
Otherwise, if you already know how to express what you want with sufficient precision, you're going to just express your ideas in the symbolic, formal language itself; you're not going to just randomly throw in some nondeterminism at the end by leaving the output up to the caprice of some statistical model, or allow something to get "lost in translation."
You need to see the comment I was replying to in order to understand the context of the point I was making.
LLMs are part of what I was thinking of, but not the totality.
We're pretty close to Generative AI - and by that I don't just mean LLMs, but the entire space - being able to use formal notations and abstractions more usefully and correctly, and therefore improve reasoning.
The comment I was replying to complained about this shifting value away from fundamentals and this being a tragedy. My point is that this is just human progress. It's what we do. You buy a microwave, you don't build one yourself. You use a calculator app on your phone, you don't work out the fundamentals of multiplication and division from first principles when you're working out how to split the bill at dinner.
I agree with your general take on all of this, but I'd add that AI will get to the point where it can express "thoughts" in formal language, and then provide appropriate tools to get the job done, and that's fine.
I might not understand Japanese culture without knowledge of Nihongo, but if I'm trying to get across Tokyo in rush hour traffic and don't know how to, do I need to understand Japanese culture, or do I need a tool to help me get my objective done?
If I care deeply about understanding Japanese culture, I will want to dive deep. And I should. But for many people, that's not their thing, and we can't all dive deep on everything, so having tools that do that for us better than existing tools is useful. That's my point: abstractions and tools allow people to get stuff done that ultimately leads to better tools and better abstractions, and so on. Complaining that people don't have a first-principles grasp of everything isn't useful.
> But are we not moving into a society where thought is unwelcome?
Not really, no. If anything clear thinking and insight will give an even bigger advantage in a society with pervasive LLM usage. Good prompts don't write themselves.
Historically speaking, what killed off APL (besides the wonky keyboard) was Lotus 1-2-3 and, shortly thereafter, MS Excel. Engineers, academicians, accountants, and MBAs needed something better than their TI-59 & HP-12C. But the CS community was obsessing about symbolics, AI and LISP, so the industry stepped in...
This was a very unfortunate coincidence, because APL could have had a much bigger impact and solved far more problems than spreadsheets ever will.
APL desperately needs a renaissance. The original vision was hand-written, consistent, and executable math notation. This was never accomplished.
If you are into this, read ahead: https://mlajtos.mu/posts/new-kind-of-paper
As I understand it, Dyalog gives away their compiler, until you put it in production. You can do all your problem solving in it without giving them any money, unless you also put the compiled result in front of your paying customers. If your solution fits a certain subset you can go full bananas and copy it into April and serve from Common Lisp.
The thing is, APL people are generally very academic. They can absolutely perform engineering tasks very fast and with concise code, but in some hypothetical average software shop, if you start talking about function ranking and Naperian functors your coworkers are going to suspect you might need medical attention. The product manager will quietly pull out their notes about you and start thinking about the cost of replacing you.
This is for several reasons, but the most important one is that the bulk of software development is about inventing a technical, somewhat formal language that represents how the customer-users talk and think, and you can't really do that in the Iverson languages. It's easy in Java, which for a long time forced you to tell every method exactly which business words can go in and come out of it. The exampleMethod combines CustomerConceptNo127 from org.customer.marketing and CustomerConceptNo211 from org.customer.financial and results in a CustomerConceptNo3 that the CEO wants to look at regularly.
You can't really do that as easily in APL. You can name data and functions, sure, but once you introduce long-winded names and namespaced structuring to map a foreign organisation into your Iverson code, you lose the terseness and elegance. Even in exceptionally sophisticated type systems in the ML family you'll find that developers struggle to make such direct connections between an invented quasilinguistic ontology and an organisation and its processes, and more regularly opt for mathematical or otherwise academic concepts.
It can work in some settings, but you'll need people that can do both the theoretical stuff and keep in mind how it translates to the customer's world, and usually it's good enough to have people that can only do the latter part.
Java and C# are good for this kind of situation, where you want to imitate the business jargon in a technical form. But programming languages like CL, Clojure, and APL have a more elegant and flexible way to describe the same solution, and in the end one that is easier to adapt - because in the end the business jargon is very fluid (business objectives and policies are likely to change next quarter), and in Java rewriting means changing a lot of lines of code (easier with the IDE, but still).
The data rarely changes, but you have to put a name on it, and those names depend on policies. That's the issue with most standard programming languages. In functional languages and APL, you don't name your data, you just document its shape[0]. Then, when your policies are known, you just write them using the functions that can act on each data type (lists, sets, hashes, primitives, functions, ...). Policy changes just mean a little bit of reshuffling.
[0]: In the parent example, CustomerConceptNo{127,211,3} are the same data, but with various transformations applied and different methods to use. In a functional language, you would only have a customer data blob (probably coming from some DB), and then a chain of functions that pipe out the CustomerConceptNo{127,211,3} forms when they are actually needed (generally at the interface). But they would be composed of the same data structures that the original blob has, so your base functions do not automatically become obsolete.
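To make that concrete, here is a rough Python sketch of the footnote's idea (field names invented): one plain customer blob, with the CustomerConceptNo views derived by small functions only where they're needed.

  customer = {"id": 42, "name": "Acme", "orders": [120.0, 80.0], "region": "EU"}

  def marketing_view(c):    # roughly CustomerConceptNo127
      return {"name": c["name"], "region": c["region"]}

  def financial_view(c):    # roughly CustomerConceptNo211
      return {"id": c["id"], "revenue": sum(c["orders"])}

  def ceo_view(c):          # roughly CustomerConceptNo3
      return {**marketing_view(c), **financial_view(c)}

When a policy changes you reshuffle these little functions; the blob and the base operations on dicts and lists stay as they were.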
The base concept is related to other useful ones.
The Sapir-Whorf hypothesis is similar. I find it most interesting when you turn it upside down: in any less-than-perfect language there are things that you either cannot think about or that are difficult to think about. Are there things that we cannot express and cannot think about in our language?
And the terms "language" and "thought" can be broader than our usual usage. For example, do the rules of social interaction determine how we interact? Zeynep Tufekci in "Twitter and Tear Gas" talks about how Twitter affords flash mobs, but not lasting social change.
Do social mechanisms like "following" someone or "commenting" or "liking" determine/afford us ways of interacting with each other? Would other mechanisms afford better collective thinking? Comments below. And be sure to like and follow. :-)
And then there is music. Not the notation, but does music express something that cannot be well expressed in other ways?
After years of looking at APL as some sort of magic, I spent some time earlier this year learning it. It is amazing how much code you can fit into a tweet using APL. Fun, but hard for me to write.
It's not as extreme but I feel similarly every time I write dense numpy code. Afterwards I almost invariably have the thought "it took me how long to write just that?" and start thinking I ought to have used a different tool.
For some reason the reality is unintuitive to me - that the other tools would have taken me far longer. All the stuff that feels difficult and like it's just eating up time is actually me being forced to work out the problem specification in a more condensed manner.
I think it's like climbing a steeper but much shorter path. It feels like more work but it's actually less. (The point of my rambling here is that I probably ought to learn APL and use it instead.)
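(For what it's worth, a made-up example of the kind of density I mean: the same spec, once as a loop and once as a single numpy expression.)

  import numpy as np

  x = np.random.rand(1000, 3)

  # loop version: find the row closest to the centre of the unit cube
  best = None
  for row in x:
      d = sum((row - 0.5) ** 2) ** 0.5
      if best is None or d < best[0]:
          best = (d, row)

  # dense version: the whole spec in one line
  closest = x[np.argmin(np.linalg.norm(x - 0.5, axis=1))]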
>Afterwards I almost invariably have the thought "it took me how long to write just that?" and start thinking I ought to have used a different tool.
I think there is also a psychological bias: we feel more "productive" in a more verbose language. Subconsciously at least, we think "programmers produce code" instead of "programmers build systems".
Should you ever decide to take that leap, maybe start here:
https://xpqz.github.io/learnapl
(disclosure: author)
I have been reading through your site, working on an APL DSL in Lisp. Excellent work! Thank you.
> It's not as extreme but I feel similarly every time I write dense numpy code.
https://analyzethedatanotthedrivel.org/2018/03/31/numpy-anot...
Indeed numpy is essentially just an APL/J with more verbose and less elegant syntax. The core paradigm is very similar, and numpy was directly inspired by the APLs.
People actually managed to channel the APL hidden under numpy into a full array language implemented on top of it: https://github.com/briangu/klongpy
Time is indeed a flat circle.
I don't know APL, but that has been my thought as well - if APL does not offer much over numpy, I'd argue that the latter is much easier to read and reason through.
I thought that too, but after a while the symbols become recognizable (just like math symbols) and then it's a pleasure to write if you have completion based on their names (the Uiua developer experience with Emacs). The issue with numpy is the intermediate variables you have to use because you're in Python.
> All the stuff that feels difficult and like it's just eating up time is actually me being forced to work out the problem specification in a more condensed manner.
Very well put!
Your experience aligns with mine as well. In APL, the sheer austerity of architecture means we can't spend time on boilerplate and are forced to immediately confront core domain concerns.
Working that way has gotten me to see code as a direct extension of business, organizational, and market issues. I feel like this has made me much more valuable at work.
Any examples you can share?
I really wish I had finished my old freeform note-taking app that compiles down to self-contained webpages (via SVG).
IMO it was a super cool idea for more technical content that’s common in STEM fields.
Here’s an example from my old chemistry notes:
https://colbyn.github.io/old-school-chem-notes/dev/chemistry...
I see your Show HN post, this is brilliant. https://news.ycombinator.com/item?id=25474335
What system are you using now?
> Nevertheless, mathematical notation has serious deficiencies. In particular, it lacks universality, and must be interpreted differently according to the topic, according to the author, and even according to the immediate context.
I personally disagree with the premise of this paper.
I think notation that is separated from the visualization and ergonomics of the problem has a high cost. Some academics prefer notation that hides away a lot of the complexity, which can potentially result in "Eureka" realizations, wild equivalences and the like. In some cases, however, it can be obfuscating and prone to introducing errors. Yet it's an important tool in communicating a train of thought.
In my opinion, having one standard notation for any domain (or closely related domains) quite stifles the creative, artistic, or explorative side of reasoning and problem solving.
Also, here's an excellent exposition about notation by none other than Terry Tao https://news.ycombinator.com/item?id=23911903
The problem the article is talking about is that those different notations are used for super basic stuff that really doesn't need any of that.
This feels like typed programming vs. untyped programming.
There are efforts in math to build "enterprise" reasoning systems. For these it makes sense to have a universal notation system (Lean, Coq, and the like).
But for personal exploration, it might be better to just jam in whatever.
My personal struggle in this space is more about teaching: taking algebra classes, etc., where the teacher is neither consistent nor honest about the personal decisions and preferences they have about notation. I became significantly better at math when I started studying type theory and the theory of mechanical proofs.
I have to admit that consistency and clarity of thought are often not implied just by the choice of notation, and I have not seen many books or professors put effort into emphasizing its importance or even introducing it formally. I've seen cases where people use fancy notation to document topics rather than how they actually think about them. It drives me nuts, because in the way you tell the story you hide a lot of how you arrived there.
This is why I picked up so readily on the exposition by Terry Tao. It shows how much clarity of thought he has that he understands the importance of notation.
> Subordination of detail
The paper doesn't really explore this concept well, IMHO. However, after a lot of time reading and writing APL applications, I have found that it points at a way of managing complexity radically different from abstraction.
We're inundated with abstraction barriers: APIs, libraries, modules, packages, interfaces, you name it. Consequences of this approach are almost cliché at this point—dizzyingly high abstraction towers, developers as just API-gluers, disconnect from underlying hardware, challenging to reason about performance, _etc._
APL makes it really convenient to take a different tack. Instead of designing abstractions, we can carefully design our data to be easily operated on with simple expressions. Where you would normally see a library function or DSL term, this approach just uses primitives directly:
For example, we can create a hash map of vector values and interned keys with something like
Standard operations are then immediately accessible:
k v⍪←↓⍉↑(2 0.33)(2 0.01)(3 0.92) ⍝ insert values
k{str[⍺] ⍵}⌸v ⍝ pretty print
k v⌿⍨←⊂k≠str⍳⊂'buggy' ⍝ deletion
What I find really nice about this approach is that each expression is no longer a black box, making it really natural to customize expressions for specific needs. For example, insertion in a hashmap would normally need to have code for potentially adding a new key, but above we're making use of a common invariant that we only need to append values to existing keys.
If this were a library API, there would either be an unused code path here, lots of variants on the insertion function, or some sophisticated type inference to do dead code elimination. Those approaches end up leaking non-domain concerns into our codebase. But, by subordinating detail instead of hiding it, we give ourselves access to as much domain-specific detail as necessary, while letting the non-relevant detail sit silently in the background until needed.
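For readers more at home in numpy than APL (the comparison comes up elsewhere in this thread), here is a rough analogue of the same layout - interned key strings plus two parallel arrays - with invented sample data; it's a sketch of the idea, not a translation of the exact APL semantics.

  import numpy as np

  strs = ["slow", "fast", "buggy", "crash"]   # interned keys
  k = np.array([0, 1, 2])                     # key indices into strs
  v = np.array([0.50, 0.125, 0.99])           # values

  # insert values: append (key index, value) pairs to the parallel arrays
  new = np.array([(2, 0.33), (2, 0.01), (3, 0.92)])
  k = np.concatenate([k, new[:, 0].astype(int)])
  v = np.concatenate([v, new[:, 1]])

  # pretty print: group values by key
  for key in np.unique(k):
      print(strs[key], v[k == key])

  # deletion: drop every entry whose key is 'buggy'
  keep = k != strs.index("buggy")
  k, v = k[keep], v[keep]

The same invariant applies here: insertion only appends to existing keys, because that's all this particular domain needs.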
Of course, doing things like this in APL ends up demanding a lot of familiarity with the APL expressions, but honestly, I don't think that ends up being much more work than deeply learning the Python ecosystem or anything equivalent. In practice, the individual APL symbols really do fade into the background and you start seeing semantically meaningful phrases instead, similar to how we read English words and phrases atomically and not one letter at a time.
To rephrase crudely: "inline everything".
This is infeasible in most languages, but if your language is concise and expressive enough, it becomes possible again to a large degree.
I always think about how Arthur Whitney just really hates scrolling. Let alone 20 open files and chains of "jump to definition". When the whole program fits on page, all that vanishes. You navigate with eye movements.
> k v⍪←↓⍉↑(2 0.33)(2 0.01)(3 0.92) ⍝ insert values
> k{str[⍺] ⍵}⌸v ⍝ pretty print
> k v⌿⍨←⊂k≠str⍳⊂'buggy' ⍝ deletion
I like your funny words. No, really, I should expend some time learning APL.
But your idea deeply resonates with my struggle of the last few weeks.
I have a legacy Python codebase with too much coupling, and every prior attempt to "improve things" ended up adding more abstraction over a plainly wrong data model.
You can't infer, reading the code linearly, what methods mutate their input objects. Some do, some don't. Sometimes the same input argument is returned even without mutation.
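A hypothetical illustration of what I mean (not the actual code, obviously):

  def apply_discount(order):     # mutates in place, returns nothing
      order["total"] *= 0.9

  def with_tax(order):           # returns a modified copy, leaves the input alone
      return {**order, "total": order["total"] * 1.2}

  def normalize(order):          # mutates AND returns the same object
      order["currency"] = order.get("currency", "EUR")
      return order

From the call site all three look the same, and result = normalize(order) is easy to misread as producing a copy.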
I would prefer some magic string that could be analyzed and understood over this sea of indirection, with factories returning different calculators that in some instances don't even share the same interface.
Sorry for the rant.
Last year The Array Cast republished an interview with Iverson from 1982.
https://www.arraycast.com/episodes/episode92-iverson
It's quite interesting, and arguably more approachable than the Turing lecture.
In 1979 APL wasn't as weird and fringe as it is today, because programming languages weren't global mass phenomena in the way that they are today, pretty much all of them were weird and fringe. C was rather fresh at the time, and if one squints a bit APL can kind of look like an abstraction that isn't very far from dense C and allows you to program a computer without having to implement pointer juggling over arrays yourself.
> In 1979
many high schools were teaching mathematics with APL! There are quite a few textbooks for learning math with APL [1] or J [2] syntax. Iverson originally wrote APL as a superior syntax for math; the programming implementation came a few years later.
[1] https://alexalejandre.com/about/#apl [2] https://code.jsoftware.com/wiki/Books#Math_for_the_Layman
man i always try squishing code into tiny spaces too and then wonder why i'm tired after, but i kinda love those moments when it all just clicks