
Preview capability

contexts-draft2
Logan McGrath 1 year ago
parent
commit
858f04e77b
  1. config.ini (2 lines changed)
  2. site/_drafts/lessons-from-sterling.md (410 lines changed)
  3. site/_drafts/sterling-benchmarks.md (167 lines changed)
  4. site/_drafts/sterling-with-memoization.md (140 lines changed)
  5. src/Green/Config.hs (16 lines changed)
  6. src/Green/Content.hs (6 lines changed)
  7. src/Green/Content/Blog.hs (61 lines changed)
  8. src/Green/Content/HomePage.hs (7 lines changed)
  9. src/Green/Content/Sitemap.hs (13 lines changed)
  10. src/Green/Template/Custom/Context.hs (2 lines changed)

config.ini (2 lines changed)

@@ -18,5 +18,5 @@ providerDirectory = site
destinationDirectory = _site
[Debug]
#printItem = index.html
preview = false
rawCss = false

site/_drafts/lessons-from-sterling.md (410 lines changed)

@@ -1,410 +0,0 @@
---
title: Lessons from Sterling
author: Logan McGrath
date: 2013-08-05T09:37:00-07:00
comments: false
tags: Sterling, language design
layout: post
---
I've spent the last seven months developing a language called [Sterling][].
Sterling was intended to be an untyped functional scripting language, something
like lazily-evaluated, immutable JavaScript. Last week I decided to shelve
Sterling.
<!--more-->
## How Sterling Worked
Sterling's evaluation model is very simple and I felt it held a lot of promise
because it made the language very flexible. Everything in Sterling is an
expression. Some expressions accept a single argument; these are called
*lambdas*. All expressions also contain sub-expressions, which can be accessed
as *attributes*. With a little sugar, a bag of attributes can be made
self-referencing and thus become an *object*.
```haskell
// An assortment of basic expression types
// a constant expression which takes no arguments
anExpression = 2 + 2
// lambda expressions take only 1 argument
aLambda = (x) -> 2 + x
// function expressions take more than 1 argument
aFunction = (x y) -> x * y
// an object expression with constructor
anObject = (constructorArg) -> object {
  madeWith: constructorArg,
}
// an object expression that behaves like a lambda after construction
invokableObject = (constructorArg) -> object {
  madeWith: constructorArg,
  invoke: (arg) -> "Made with #{self.madeWith} and invoked with #{arg}",
}
```
Expressions could be built up to carry a great deal of capability. Because
Sterling is untyped, decoration and duck typing are used heavily to compose ever
more features into expressions.
Sterling was directly inspired by [Lambda Calculus][]. This had an enormous
impact on the design of the language, most visibly in how the language executes
at runtime. Expressions in Sterling are represented as trees and leaves.
Top-level expressions have names, and they can be inserted into other
expressions by referencing those names.
```haskell
// A recursive named expression looks like this:
fibonacci = (n) -> if n <= 1 then
  n
else
  fibonacci (n - 1) + fibonacci (n - 2)
end
```
Because each expression is a tree, no expression needs to be executed until
its result is actually needed. This lazy execution model allows for very
large, complex expressions to be built in one function then returned to the
outside world to be further processed and executed. Functions could be created
inline and passed as arguments to other functions, or constructed within
functions and returned.
Sterling's tree-based structure naturally supported a prototype-based object
model. To modify an expression tree, the tree needed to create a copy of itself
with the changes applied. All expressions were thus effective prototypes. This
also had the benefit of directly supporting immutability and helped to enforce a
functional programming paradigm.
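To make the model concrete, here is a minimal sketch in Haskell of the kind of tree-and-reduce evaluator described above. It is an illustration only, not Sterling's actual (Java-based) implementation, and the constructor and helper names are invented for the example:
```haskell
-- Illustrative only: expressions as an immutable tree, reduced on demand.
-- Anything that "changes" a tree really builds a modified copy.
data Expr
  = IntVal Integer      -- an atomic expression that can't reduce further
  | Var String          -- a reference to a named top-level expression
  | Lambda String Expr  -- a one-argument lambda
  | Apply Expr Expr     -- application of an expression to an argument
  | Add Expr Expr       -- a built-in operation, standing in for "+"
  deriving (Show)

-- Reduce an expression against an environment of named expressions.
-- The input tree is never mutated; reduction returns a new tree.
-- > reduce [] (Add (IntVal 2) (IntVal 2))  ==>  IntVal 4
reduce :: [(String, Expr)] -> Expr -> Expr
reduce env expr = case expr of
  Var name -> maybe expr (reduce env) (lookup name env)
  Apply f arg -> case reduce env f of
    Lambda param body -> reduce env (substitute param arg body)
    f' -> Apply f' arg
  Add a b -> case (reduce env a, reduce env b) of
    (IntVal x, IntVal y) -> IntVal (x + y)
    (a', b') -> Add a' b'
  _ -> expr

-- Substitution rebuilds the tree it walks, swapping the bound name for the
-- argument expression.
substitute :: String -> Expr -> Expr -> Expr
substitute name arg expr = case expr of
  Var n | n == name -> arg
  Lambda p body | p /= name -> Lambda p (substitute name arg body)
  Apply f x -> Apply (substitute name arg f) (substitute name arg x)
  Add a b -> Add (substitute name arg a) (substitute name arg b)
  _ -> expr
```
Each `substitute` call rebuilds the tree it walks, which is the kind of cost that shows up under "The Problems" below.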
## What Could Have Been
I intended Sterling to be a functional scripting language. In some ways, I was
looking to create a JavaScript reboot that clung closer to JavaScript's
functional roots and would be used for general-purpose scripting.
Sterling's syntax was designed to be very terse, readable, and orthogonal. By
that I mean everything in Sterling should be an expression that can be used
[virtually anywhere for anything][]. Because Sterling was based on lambdas, this
worked particularly well for argument expressions because arguments could fold
into the function call result on the left:
```haskell
// Consing a list by folding arguments, left-to-right
[] 1 2 3 4
> [1] 2 3 4
> [1, 2] 3 4
> [1, 2, 3] 4
> [1, 2, 3, 4]
```
This folding capability meant that Sterling could support very expressive
programming styles. Any function could be returned as the result of another
function call and continue chaining against arguments. Sterling's terse syntax
also made defining functions very easy:
```haskell
// Some basic functions in Sterling
identity = (x) -> x
selfApply = (x) -> x x
apply = (x y) -> x y
selectFirst = (x y) -> x
selectSecond = (x y) -> y
conditional = (condition) -> if condition.true? then selectFirst else selectSecond end
friday? = say $ conditional (today.is :friday) 'Yay Friday!' 'Awww...'
```
Because Sterling was intended to be immutable, objects would be used to
represent state and carry behavior that returns new state resulting from an
operation:
```haskell
// Printing arguments from an immutable list iterator
main = (args) ->
  print args.iterator // gets an Iterator

print = (iterator) ->
  say unless iterator.empty? then
    printNext iterator 0
  else
    'Empty iterator'
  end

printNext = (iterator index) ->
  unless iterator.empty? then
    "arg #{index} => #{iterator.current}\n" + printNext iterator.tail index.up
  end

Iterator = (elements position) -> object {
  empty?: position >= elements.length,
  head: Iterator elements 0,
  current: elements[position],
  tail: Iterator elements position.up,
}
```
Paul Hammant at one point suggested baking dependency injection
[directly into a language][], and even proposed that I do this in Sterling. This drove
development of a metadata system in Sterling that could be used to support
metaprogramming and eventually dependency injection.
```haskell
// Meta attributes on expressions
@component { uses: [ :productionDb ] }
@useWhen (runtime -> runtime.env is :production)
Inventory = (db) -> object {
  numberOfItems: db.asInt $ db.scalarQuery "SELECT COUNT(*) FROM thingies",
  priceCheck: (thingy) -> db.asMoney $ db.scalarQuery "SELECT price FROM thingies WHERE id = :id" { id: thingy.id },
}

@provides :productionDb
createDb = ...

@fake? true
@component { name: :Inventory }
@useWhen (runtime -> runtime.env is :development)
FakeInventory = object -> {
  numberOfItems: 0,
  priceCheck: (thingy) -> thingy.price,
}
```
The metadata system was very flexible and could support arbitrary meta
annotations. The above metadata translates to the following map structures at
runtime:
```haskell
// What meta attributes look like if they were JavaScript
Inventory.meta = {
  "component": {
    "uses": [ "productionDb" ]
  },
  "useWhen": {
    "value": function (runtime) {
      return runtime["env"] == "production";
    }
  }
};

createDb.meta = {
  "provides": {
    "value": "productionDb",
  }
};

FakeInventory.meta = {
  "fake?": {
    "value": true
  },
  "component": {
    "name": "Inventory"
  },
  "useWhen": {
    "value": function (runtime) {
      return runtime["env"] == "development";
    }
  }
};
```
I felt these functional features and expressive syntax would make for an
enjoyable and productive programming experience. The meta system in particular,
I felt, could become quite powerful, especially for customizing the load-time
behavior of Sterling programs. However, some of my goals came with a few problems.
## The Problems
### Speed
Sterling is amazingly slow. A natural consequence of a tree-based language is
that trees must be copied and modified for many operations, no matter how
"trivial" they may be (integer arithmetic, for example.) Recursive functions
like the `fibonacci` expression above had a particularly nasty characteristic of
building enormous trees that took a lot of time to reduce to single values.
The speed issues in Sterling were partially mitigated using [memoization][].
### Memoization: Blessing But Possibly A Curse
Memoization increased the possibility for static state to hang around in an
application. Applying arguments to an object constructor, for instance, would
return a previously-constructed object. I'm not entirely sure what the total
impact of the "object constructor problem" could have been, as objects are not
mutable, but I didn't like this characteristic nonetheless. Immutability,
however, didn't entirely hold in practice (see "Escaping The Matrix" below).
Named expressions are persistent in memory. If a named expression took a large
argument, or returned a large result, then the total memory cost of a memoizing
expression could become quite high over time.
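As an illustration of why this worried me, here is a small Haskell sketch of the kind of cache that memoization implies. The names and the `Integer`-only signature are assumptions made for the example, not Sterling's actual Java internals; the point is that nothing ever evicts an entry:
```haskell
import Data.IORef (IORef, modifyIORef', newIORef, readIORef)
import qualified Data.Map.Strict as Map

-- A cache keyed on (expression name, argument). Entries are never removed,
-- which is the "static state" and memory-growth concern described above.
type Cache = IORef (Map.Map (String, Integer) Integer)

newCache :: IO Cache
newCache = newIORef Map.empty

memoApply :: Cache -> String -> (Integer -> Integer) -> Integer -> IO Integer
memoApply cacheRef name f arg = do
  cache <- readIORef cacheRef
  case Map.lookup (name, arg) cache of
    Just result -> pure result -- reuse the previously computed value
    Nothing -> do
      let result = f arg -- evaluate once
      modifyIORef' cacheRef (Map.insert (name, arg) result)
      pure result -- retained for the life of the cache
```
Memoizing an object constructor this way effectively turns it into a registry of every object it has ever produced, along with every argument it was ever given.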
### The Impacts Of Typelessness
Types are actually quite nice to have, and I began to miss them quite a bit the
more I worked on Sterling. While Sterling is very flexible (because it has no
types), it also has very poor support for polymorphism (because it has no types).
Want to do something else if you receive an `Asteroid` object rather than a
`Spaceship` object?
The naïve solution is to implement an if-case for each expected type:
```haskell
Spaceship = object {
  collideWith: (other) ->
    if other.meta.name is 'Asteroid' then
      say 'Spaceship collided with an asteroid!'
    else if other.meta.name is 'Spaceship' then
      say 'Spaceships collide!'
    end
}

Asteroid = object {
  collideWith: (other) ->
    if other.meta.name is 'Asteroid' then
      say 'Asteroids collide!'
    else if other.meta.name is 'Spaceship' then
      say 'Asteroid collided with a spaceship!'
    end
}
```
This is fragile, though, and the code is complex. What's worse, there's no way
to ensure that a method is receiving an `Asteroid` and not another object
that simply implements its API. A better solution is to let the colliding
object select the proper method from the object it's colliding with:
```haskell
Spaceship = object {
  collideWith: (other) -> other.collidedWithSpaceship self,
  collidedWithSpaceship: (spaceship) -> say 'Spaceships collide!',
  collidedWithAsteroid: (asteroid) -> say 'Spaceship collided with an asteroid!',
}

Asteroid = object {
  collideWith: (other) -> other.collidedWithAsteroid self,
  collidedWithSpaceship: (spaceship) -> say 'Asteroid collided with a spaceship!',
  collidedWithAsteroid: (asteroid) -> say 'Asteroids collide!',
}
```
This solution is better. It's also similar to implementing the [visitor pattern][] in
Java. I still don't like it because there's no type safety, and adding support
for more types requires violating the [open/closed principle][]. For instance,
in order for a `Bunny` to be correctly collided-with, a `collidedWithBunny`
method must be added to both `Spaceship` and `Asteroid`. Developers may find it
easier instead to allow the `Bunny` to masquerade as an asteroid:
```haskell
// Spaceship-eating Bunny
Bunny = object {
  collideWith: (other) -> other.collidedWithAsteroid self, // muahaha I'm an asteroid!
  collidedWithSpaceship: (spaceship) -> say 'NOM NOM NOM NOM!',
  collidedWithAsteroid: (asteroid) -> ...
}
```
This [single-dispatch behavior][] means that for any argument applied to a
method name, the same method will be dispatched. In the case of Java, this is
determined by the type of a method's arguments at compile time. Adding new
methods for similarly-typed arguments requires all client code be recompiled.
While Sterling may not have typing, it is still single-dispatch.
The lack of types became particularly painful when implementing arithmetic
operations, and compile-time analysis was nearly impossible without collecting a
great deal of superfluous metadata.
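For contrast, here is roughly what the collision example looks like with the algebraic data types I mention wanting later in this post. This is a Haskell sketch of the typed alternative, not anything Sterling supported: dispatch becomes a pattern match over both arguments, and the compiler's exhaustiveness checking can point out every case a newly added `Bunny` leaves unhandled.
```haskell
-- Illustrative sketch: the collision example over an algebraic data type.
data Body = Spaceship | Asteroid | Bunny
  deriving (Show)

-- Both arguments participate in dispatch, which is the multi-method-style
-- behavior the visitor dance above only approximates.
-- > collide Bunny Spaceship  ==>  "NOM NOM NOM NOM!"
collide :: Body -> Body -> String
collide Spaceship Asteroid  = "Spaceship collided with an asteroid!"
collide Asteroid  Spaceship = "Asteroid collided with a spaceship!"
collide Spaceship Spaceship = "Spaceships collide!"
collide Asteroid  Asteroid  = "Asteroids collide!"
collide Bunny     _         = "NOM NOM NOM NOM!" -- the bunny eats whatever it hits
collide _         Bunny     = "NOM NOM NOM NOM!"
```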
### Escaping The Matrix
As I worked on Sterling, I required functionality that wasn't yet directly
supportable in the language itself. I solved this problem using a "glue"
expression that could tie into a Java-based implementation:
```haskell
EmptyIterator = glue 'sterling.lang.builtin.EmptyIterator'
List = glue 'sterling.lang.builtin.ListConstructor'
Set = glue 'sterling.lang.builtin.SetConstructor'
Tuple = glue 'sterling.lang.builtin.TupleConstructor'
Map = glue 'sterling.lang.builtin.MapConstructor'
```
For short-term problems, this option isn't too bad, but it allows the programmer
to escape the immutable "Matrix" of Sterling. For example, I implemented
Sterling's collections as thin wrappers around Java collections, and allowed
them to be mutable. Actually, a lot of things in Sterling were mutable:
* Method collections on expressions
* Object methods
* Maps
* Lists
This, coupled with memoization, could cause a lot of issues with static state
and had the potential to enable a lot of bad design decisions for programs
written in Sterling.
## The Good Parts
Despite the baggage, there are a few takeaways!
Sterling's syntax is very small and terse. I particularly enjoyed not having to
type a lot of parentheses, braces, commas, and semicolons. Separating arguments
by spaces allowed the language to read like a book.
Most expressions can be delimited with whitespace alone, and because everything
is an expression, objects could be created inline and if-cases could be used as
arguments.
Operators are just methods. Any object or expression can define a "+" operator
and customize what it does. With polymorphism supported via multi-methods, this
can become an incredibly powerful feature.
Sterling also has the ability to define arbitrary metadata on any named
expression. This metadata is gathered into a `meta` attribute and can be
inspected at runtime to support a form of metaprogramming.
## What I'm Carrying Forward
I'm now working on a new language project that will be borrowing Sterling's
syntax. This time, however, I will be using types. Algebraic data types hold a
certain fascination for me, and I'm interested in seeing what I can do with
them. At the very least, I do intend on using multi-methods for better
polymorphism support.
I don't think I like declaring scope; it's verbose. The same goes for declaring
types, which should be restricted to places where they affect interfaces, like
function signatures.
While Sterling's meta system didn't really go anywhere, I do intend to carry
it forward as a supplement to algebraic types. I may even still bake in
dependency injection because I hate all the typing required to tie together an
application.
I don't believe I will carry forward mandatory immutability, though I may
support some form of "immutability by default".
Sterling's lazy evaluation caused headaches more than a few times.
I'll probably not make any successor language lazily evaluated because
memoization becomes a near requirement in order to make lazy evaluation useful.
## My Holy Grail
* A language that is interpreted and optionally compiled either AOT or JIT
* [Inferred typing][] as opposed to [nominal typing][]
* At least pseudo-declarative
* Dynamic to some degree
* Easy to write, easy to read
* Highly composable
* Simple closures
* First-class functions, if not first-class everything
[Sterling]: https://github.com/lmcgrath/sterling
[Lambda Calculus]: http://en.wikipedia.org/wiki/Lambda_calculus
[Inferred typing]: http://en.wikipedia.org/wiki/Type_inference
[nominal typing]: http://en.wikipedia.org/wiki/Nominative_type_system
[virtually anywhere for anything]: http://brandonbyars.com/2008/07/21/orthogonality/
[directly into a language]: http://paulhammant.com/blog/crazy-bob-and-type-safety-for-dependency-injection.html/
[open/closed principle]: http://en.wikipedia.org/wiki/Open/closed_principle
[memoization]: {{route '_drafts/sterling-with-memoization.md'}}
[visitor pattern]: http://en.wikipedia.org/wiki/Visitor_pattern#Java_example
[single-dispatch behavior]: http://en.wikipedia.org/wiki/Multiple_dispatch#Java

site/_drafts/sterling-benchmarks.md (167 lines changed)

@@ -1,167 +0,0 @@
---
title: Sterling Benchmarks
date: 2013-06-16T21:12:00-07:00
comments: false
tags: Sterling, language design
layout: post
---
Since [mid January][], I’ve been developing a functional scripting language I
call [Sterling][]. In the past few weeks, Sterling has become nearly usable, but
it doesn’t seem to be very fast. So this weekend, I’ve taken the time to create
a simple (read: naïve) benchmark.
<!--more-->
The benchmark uses a [recursive algorithm][] to calculate the Nth member of the
[Fibonacci sequence][]. I’ve implemented both Sterling and Java versions of the
algorithm and I will be benchmarking each for comparison.
```haskell
// Sterling Implementation
fibonacci = n -> if n = 0 then 0
  else if n = 1 then 1
  else fibonacci (n - 1) + fibonacci (n - 2)
```
```java
// Java Implementation
static int fibonacci(int n) {
    if (n == 0) {
        return 0;
    } else if (n == 1) {
        return 1;
    } else {
        return fibonacci(n - 1) + fibonacci(n - 2);
    }
}
```
### Why was the Fibonacci sequence chosen for the benchmark?
The algorithm for calculating the Nth member of the Fibonacci sequence has two
key traits:
* It’s recursive
* It has O(2<sup>n</sup>) complexity
Sterling as of right now performs zero optimizations, so I’m assuming this
algorithm will bring out Sterling’s worst performance characteristics
(muahahaha).
## The benchmark execution plan
I’m using a very basic benchmark that excludes Sterling’s compilation overhead
and compares the results to native Java. I will execute the Fibonacci algorithm
100 times per iteration, for 10 iterations, and report the average elapsed time
per iteration.
```java
// Benchmark, pseudo-Java
Expression input = IntegerConstant(20);
Expression sterlingFibonacci = load("sterling/math/fibonacci");

void javaBenchmark() {
    List<Interval> intervals;
    int value = input.getValue();
    for (int i : iterations) {
        long startTime = currentTimeMillis();
        for (int j : executions) {
            fibonacci(value);
        }
        intervals.add(currentTimeMillis() - startTime);
        printIteration(i, intervals.last());
    }
    printAverage(intervals);
}

void sterlingBenchmark() {
    List<Interval> intervals;
    for (int i : iterations) {
        long startTime = currentTimeMillis();
        for (int j : executions) {
            sterlingFibonacci.apply(input).evaluate();
        }
        intervals.add(currentTimeMillis() - startTime);
        printIteration(i, intervals.last());
    }
    printAverage(intervals);
}
```
## The benchmark results
```bash
Java Benchmark
--------------
Iteration 0: executions = 100; elapsed = 4 milliseconds
Iteration 1: executions = 100; elapsed = 4 milliseconds
Iteration 2: executions = 100; elapsed = 4 milliseconds
Iteration 3: executions = 100; elapsed = 4 milliseconds
Iteration 4: executions = 100; elapsed = 4 milliseconds
Iteration 5: executions = 100; elapsed = 4 milliseconds
Iteration 6: executions = 100; elapsed = 4 milliseconds
Iteration 7: executions = 100; elapsed = 4 milliseconds
Iteration 8: executions = 100; elapsed = 4 milliseconds
Iteration 9: executions = 100; elapsed = 4 milliseconds
--------------
Average for 10 iterations X 100 executions: 4 milliseconds
Sterling Benchmark
------------------
Iteration 0: executions = 100; elapsed = 8,152 milliseconds
Iteration 1: executions = 100; elapsed = 7,834 milliseconds
Iteration 2: executions = 100; elapsed = 7,873 milliseconds
Iteration 3: executions = 100; elapsed = 7,873 milliseconds
Iteration 4: executions = 100; elapsed = 7,910 milliseconds
Iteration 5: executions = 100; elapsed = 7,973 milliseconds
Iteration 6: executions = 100; elapsed = 7,927 milliseconds
Iteration 7: executions = 100; elapsed = 7,793 milliseconds
Iteration 8: executions = 100; elapsed = 7,912 milliseconds
Iteration 9: executions = 100; elapsed = 7,986 milliseconds
------------------
Average for 10 iterations X 100 executions: 7,923 milliseconds
```
### Immediate conclusions:
Sterling is _**REALLY**_ slow!
Sterling executes directly against an abstract syntax tree representing
operations and data. This tree is generally immutable, so the execution is
performed by effectively rewriting the tree to reduce each node into an “atomic”
expression, such as an integer constant or lambda (which can’t be further
reduced without an applied argument).
References to functions are inserted into the tree by copying the function’s
tree into the reference’s node. The function is then evaluated with a given
argument to reduce the tree to a single node. These copy-and-reduce operations
are very costly and are a likely reason for Sterling’s poor performance.
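To make that cost concrete, here is a tiny Haskell sketch of the copy step in isolation. It is an illustration only, not Sterling's Java implementation; the names are invented for the example:
```haskell
import Data.Maybe (fromMaybe)

-- Illustrative only: resolving a reference copies the named expression's whole
-- tree into the call site; reduction then continues on the enlarged tree.
data Expr
  = IntVal Integer
  | Ref String          -- a reference to a named top-level expression
  | Apply Expr Expr
  deriving (Show)

-- One resolution step: replace each reference with a copy of its definition.
-- A recursive definition like fibonacci re-introduces references to itself,
-- so the tree keeps growing until the base cases are finally reached.
resolveOnce :: [(String, Expr)] -> Expr -> Expr
resolveOnce defs expr = case expr of
  Ref name    -> fromMaybe expr (lookup name defs)
  Apply f arg -> Apply (resolveOnce defs f) (resolveOnce defs arg)
  _           -> expr
```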
## @TODO
### Memoization
Copying and reducing a function tree for an argument is expensive. These
operations should not need to be performed more than once for any function and
argument pair.
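As a sketch of the idea (assuming nothing about Sterling's internals), memoizing by argument looks something like this in Haskell: results live in a lazily built table indexed by the argument, so each entry is computed at most once.
```haskell
-- Illustrative only: fibonacci memoized per argument via a shared lazy table.
-- > fib 20  ==>  6765
fibTable :: [Integer]
fibTable = map fib [0 ..]

fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fibTable !! (n - 1) + fibTable !! (n - 2) -- reuses earlier entries
```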
### Bytecode perhaps?
Given the sheer amount of recursion and method calls being performed to execute
Sterling, does it make sense to compile the syntax tree into a bytecode that can
be executed in a loop?
## Links
* [Sterling GitHub Project][]
* [Benchmark Code][]
* [Sterling Fibonacci Implementation][]
[mid January]: https://github.com/lmcgrath/sterling/tree/8b58ce4d4b080b353f7870ec0c0c30639fb2fa7b
[Sterling]: https://github.com/lmcgrath/sterling
[recursive algorithm]: http://en.wikipedia.org/wiki/Dynamic_programming#Fibonacci_sequence
[Fibonacci sequence]: http://en.wikipedia.org/wiki/Fibonacci_sequence
[Sterling GitHub Project]: https://github.com/lmcgrath/sterling
[Benchmark Code]: https://github.com/lmcgrath/sterling/blob/post_20130616_sterling_benchmark/src/test/java/sterling/math/FibonacciBenchmarkTest.java
[Sterling Fibonacci Implementation]: https://github.com/lmcgrath/sterling/blob/post_20130616_sterling_benchmark/src/main/resources/sterling/math/_base.ag

site/_drafts/sterling-with-memoization.md (140 lines changed)

@@ -1,140 +0,0 @@
---
title: Sterling With Memoization
author: Logan McGrath
date: 2013-06-17T04:26:00-07:00
comments: false
tags: Sterling, language design
layout: post
---
In my [last post][] I wrote about performance in the [Sterling][] programming
language with a basic benchmark. Today I'm ticking off one **@TODO** item:
[Memoization][].
<!--more-->
Sterling now stores the result of each function/argument pair, returning the
stored result rather than recalculating an already-known value. I've reused the
benchmark from the previous post, and the difference in execution speed is very
pronounced:
```bash
Java Benchmark
--------------
Iteration 0: executions = 100; elapsed = 6 milliseconds
Iteration 1: executions = 100; elapsed = 4 milliseconds
Iteration 2: executions = 100; elapsed = 4 milliseconds
Iteration 3: executions = 100; elapsed = 4 milliseconds
Iteration 4: executions = 100; elapsed = 4 milliseconds
Iteration 5: executions = 100; elapsed = 4 milliseconds
Iteration 6: executions = 100; elapsed = 4 milliseconds
Iteration 7: executions = 100; elapsed = 4 milliseconds
Iteration 8: executions = 100; elapsed = 4 milliseconds
Iteration 9: executions = 100; elapsed = 4 milliseconds
--------------
Average for 10 iterations X 100 executions: 4 milliseconds
Sterling Benchmark
------------------
Iteration 0: executions = 100; elapsed = 648 milliseconds
Iteration 1: executions = 100; elapsed = 0 milliseconds
Iteration 2: executions = 100; elapsed = 1 milliseconds
Iteration 3: executions = 100; elapsed = 0 milliseconds
Iteration 4: executions = 100; elapsed = 0 milliseconds
Iteration 5: executions = 100; elapsed = 0 milliseconds
Iteration 6: executions = 100; elapsed = 0 milliseconds
Iteration 7: executions = 100; elapsed = 0 milliseconds
Iteration 8: executions = 100; elapsed = 0 milliseconds
Iteration 9: executions = 100; elapsed = 0 milliseconds
------------------
Average for 10 iterations X 100 executions: 64 milliseconds
```
Sterling without memoization required on average 0.079 seconds to calculate the
20th member of the Fibonacci sequence, but with memoization, the amount of time
shrinks to 0.006 seconds. The time penalty only applies the first time the
function is executed for a given argument, so call times become
near-instantaneous.
## Sterling is faster than Java!
**Not really.** But it is if I fiddle with the benchmark variables a bit (:
By changing the benchmark to execute the Fibonacci function 1000 times for 100
iterations, something interesting happens:
```bash
Java Benchmark
--------------
Iteration 0: executions = 1000; elapsed = 42 milliseconds
Iteration 1: executions = 1000; elapsed = 39 milliseconds
Iteration 2: executions = 1000; elapsed = 38 milliseconds
Iteration 3: executions = 1000; elapsed = 39 milliseconds
Iteration 4: executions = 1000; elapsed = 39 milliseconds
Iteration 5: executions = 1000; elapsed = 39 milliseconds
Iteration 6: executions = 1000; elapsed = 41 milliseconds
Iteration 7: executions = 1000; elapsed = 40 milliseconds
Iteration 8: executions = 1000; elapsed = 38 milliseconds
Iteration 9: executions = 1000; elapsed = 38 milliseconds
...
Iteration 99: executions = 1000; elapsed = 39 milliseconds
--------------
Average for 100 iterations X 1000 executions: 39 milliseconds
Sterling Benchmark
------------------
Iteration 0: executions = 1000; elapsed = 629 milliseconds
Iteration 1: executions = 1000; elapsed = 0 milliseconds
Iteration 2: executions = 1000; elapsed = 0 milliseconds
Iteration 3: executions = 1000; elapsed = 0 milliseconds
Iteration 4: executions = 1000; elapsed = 0 milliseconds
Iteration 5: executions = 1000; elapsed = 0 milliseconds
Iteration 6: executions = 1000; elapsed = 0 milliseconds
Iteration 7: executions = 1000; elapsed = 0 milliseconds
Iteration 8: executions = 1000; elapsed = 1 milliseconds
Iteration 9: executions = 1000; elapsed = 0 milliseconds
...
Iteration 99: executions = 1000; elapsed = 0 milliseconds
------------------
Average for 100 iterations X 1000 executions: 6 milliseconds
```
### This benchmark smells funny
Yes, the performance in this benchmark is very contrived. But this does present
an interesting potential property of applications written in Sterling: If an
application performs a great deal of repeated calculations, it will run faster
over time. A quick glance at the second benchmark will show that Java is
performing the calculation every single time it is called, whereas Sterling only
requires the first call and then it stores the result. This suggests **O(1)**
vs. **O(n)** time complexity in Sterling's favor.
You won't get this sort of performance from web applications because of their
side-effect-driven nature, but for number crunching Sterling may very well be a
good idea.
## @TODO
### How does memoization impact memory?
Obviously, those calculated values get stored somewhere, and somewhere means
memory is being used. I should perform another benchmark comparing memory
requirements of the Fibonacci algorithm between pure Java and Sterling.
### What if I don't want memoization for a particular function?
There may be some cases where you want to recalculate a value for a known
argument. For example, if I query a database I shouldn't necessarily expect the
same result each time. Sterling should provide an easy way of signalling that a
function should not be memoized.
## Links
* [Commit containing memoization changes][]
* [Benchmark showing O(1) complexity][]
[last post]: {{route '_drafts/sterling-benchmarks.md'}}
[Sterling]: https://github.com/lmcgrath/sterling
[Memoization]: https://en.wikipedia.org/wiki/Memoization
[Commit containing memoization changes]: https://github.com/lmcgrath/sterling/commit/7d69d49a911d2d916701fa973e02ffabe82afe9d
[Benchmark showing O(1) complexity]: https://github.com/lmcgrath/sterling/blob/5c879ece28194fdbc36ed5dff2a760d6a38a4033/src/test/java/sterling/math/FibonacciBenchmarkTest.java

src/Green/Config.hs (16 lines changed)

@@ -6,9 +6,10 @@ import qualified Data.Text as T
import Green.Common
import Green.Lens
import Hakyll.Core.Configuration as HC
import Hakyll.Core.Identifier.Pattern ((.||.))
data SiteDebug = SiteDebug
{ _debugPrintItem :: Maybe Identifier,
{ _debugPreview :: Bool,
_debugRawCss :: Bool
}
@@ -17,7 +18,7 @@ makeLenses ''SiteDebug
defaultSiteDebug :: SiteDebug
defaultSiteDebug =
SiteDebug
{ _debugPrintItem = Nothing,
{ _debugPreview = False,
_debugRawCss = False
}
@@ -72,6 +73,13 @@ siteStoreDirectory = siteHakyllConfiguration . storeDirectoryL
siteInMemoryCache :: Lens' SiteConfig Bool
siteInMemoryCache = siteHakyllConfiguration . inMemoryCacheL
sitePostsPattern :: SimpleGetter SiteConfig Pattern
sitePostsPattern = to f
where
f config
| config ^. siteDebug . debugPreview = "_posts/**" .||. "_drafts/**"
| otherwise = "_posts/**"
hasEnvFlag :: String -> [(String, String)] -> Bool
hasEnvFlag f e = isJust (lookup f e)
@@ -90,8 +98,8 @@ parseConfigIni env timeLocale time iniText = parseIniFile iniText do
debugSettings <- sectionDef "Debug" defaultSiteDebug do
SiteDebug
<$> configEnvMbOf "printItems" "SITE_PREVIEW" string env
<*> configEnvFlag "rawCss" "SITE_RAW_CSS" False env
<$> configEnvFlag "preview" "DEBUG_PREVIEW" False env
<*> configEnvFlag "rawCss" "DEBUG_RAW_CSS" False env
displayFormat <- section "DisplayFormats" do
SiteDisplayFormat

src/Green/Content.hs (6 lines changed)

@@ -28,10 +28,10 @@ content config = do
codeDep <- code
templateDep <- templates
rulesExtraDependencies [codeDep, templateDep] do
blog context
blog config context
feed
homePage context
homePage config context
pages context
robotsTxt context
sitemap context
sitemap config context
brokenLinks

src/Green/Content/Blog.hs (61 lines changed)

@@ -8,34 +8,36 @@ where
import Green.Common
import Green.Compiler (loadExistingSnapshots)
import Green.Config
import Green.Route
import Green.Template
import Green.Template.Custom
import qualified Hakyll as H
blog :: Context String -> Rules ()
blog context = do
categories <- buildCategories "_categories/**" makeCategoryId
tags <- buildTags "_posts/**" makeTagId
blog :: SiteConfig -> Context String -> Rules ()
blog config context = do
let postsPattern = config ^. sitePostsPattern
categories <- buildCategories postsPattern makeCategoryId
tags <- buildTags postsPattern makeTagId
blogHome categories tags context
posts context
archives context
blogHome config categories tags context
posts postsPattern context
archives config context
categoriesPages categories context
tagesPages tags context
tagsPages tags context
draftsIndex context
drafts context
drafts config context
blogHome :: Tags -> Tags -> Context String -> Rules ()
blogHome categories tags context =
blogHome :: SiteConfig -> Tags -> Tags -> Context String -> Rules ()
blogHome config categories tags context =
match "blog.html" do
route indexRoute
compile do
categoryCloud <- renderTagCloud categories
tagCloud <- renderTagCloud tags
recentPosts <- recentPostsContext
recentPosts <- recentPostsContext config
let blogContext =
constField "categoryCloud" categoryCloud
<> constField "tagCloud" tagCloud
@@ -47,12 +49,12 @@ blogHome categories tags context =
>>= layoutCompiler blogContext
>>= relativizeUrls
archives :: Context String -> Rules ()
archives context = do
archives :: SiteConfig -> Context String -> Rules ()
archives config context = do
match "archives.html" do
route indexRoute
compile do
publishedPosts <- H.recentFirst =<< loadPublishedPosts
publishedPosts <- H.recentFirst =<< loadPublishedPosts config
let archivesContext =
constField "posts" (itemListValue context publishedPosts)
<> postContext
@@ -77,9 +79,9 @@ draftsIndex context = do
>>= layoutCompiler draftsContext
>>= relativizeUrls
posts :: Context String -> Rules ()
posts context = do
match "_posts/**" do
posts :: Pattern -> Context String -> Rules ()
posts postsPattern context = do
match postsPattern do
route $
subPrefixRoute "_posts/" "blog/"
`composeRoutes` dateRoute
@@ -94,8 +96,8 @@ posts context = do
where
postsContext = postContext <> context
drafts :: Context String -> Rules ()
drafts context = do
drafts :: SiteConfig -> Context String -> Rules ()
drafts config context = do
match "_drafts/**" do
route $
subPrefixRoute "_drafts/" "drafts/"
@@ -105,11 +107,14 @@ drafts context = do
compile $
getResourceBody
>>= contentCompiler draftsContext
>>= snapshotCompiler [draftPostsSnapshot]
>>= snapshotCompiler snapshots
>>= layoutCompiler draftsContext
>>= relativizeUrls
where
draftsContext = postContext <> context
snapshots =
draftPostsSnapshot :
[publishedPostsSnapshot | config ^. siteDebug . debugPreview]
categoriesPages :: Tags -> Context String -> Rules ()
categoriesPages categories context =
@@ -131,8 +136,8 @@ categoriesPages categories context =
>>= layoutCompiler categoryContext
>>= relativizeUrls
tagesPages :: Tags -> Context String -> Rules ()
tagesPages tags context =
tagsPages :: Tags -> Context String -> Rules ()
tagsPages tags context =
H.tagsRules tags \tag pat -> do
route indexRoute
compile do
@@ -157,9 +162,9 @@ postContext =
<> tagLinksField "tagLinks"
<> postHeaderField "postHeader"
recentPostsContext :: Compiler (Context String)
recentPostsContext = do
recentPosts <- fmap (take 5) . H.recentFirst =<< loadPublishedPosts
recentPostsContext :: SiteConfig -> Compiler (Context String)
recentPostsContext config = do
recentPosts <- fmap (take 5) . H.recentFirst =<< loadPublishedPosts config
let latestPost = take 1 recentPosts
previousPosts = drop 1 recentPosts
return $
@@ -169,8 +174,8 @@ recentPostsContext = do
teaserContext :: Context String
teaserContext = teaserField "teaser" publishedPostsSnapshot
loadPublishedPosts :: Compiler [Item String]
loadPublishedPosts = loadExistingSnapshots "_posts/**" publishedPostsSnapshot
loadPublishedPosts :: SiteConfig -> Compiler [Item String]
loadPublishedPosts config = loadExistingSnapshots (config ^. sitePostsPattern) publishedPostsSnapshot
loadDraftPosts :: Compiler [Item String]
loadDraftPosts = loadExistingSnapshots "_drafts/**" draftPostsSnapshot

src/Green/Content/HomePage.hs (7 lines changed)

@@ -1,16 +1,17 @@
module Green.Content.HomePage (homePage) where
import Green.Common
import Green.Config
import Green.Content.Blog
import Green.Template.Custom
import Hakyll (recentFirst)
homePage :: Context String -> Rules ()
homePage siteContext =
homePage :: SiteConfig -> Context String -> Rules ()
homePage config siteContext =
match "index.html" do
route idRoute
compile do
posts <- fmap (take 5) $ recentFirst =<< loadPublishedPosts
posts <- fmap (take 5) $ recentFirst =<< loadPublishedPosts config
let context =
constField "previousPosts" (itemListValue siteContext posts)
<> teaserField "teaser" publishedPostsSnapshot

src/Green/Content/Sitemap.hs (13 lines changed)

@@ -2,23 +2,24 @@ module Green.Content.Sitemap (sitemap) where
import Green.Common
import Green.Compiler (loadExistingSnapshots)
import Green.Config
import Green.Content.Blog (loadPublishedPosts)
import Green.Template
import Hakyll (recentFirst)
sitemap :: Context String -> Rules ()
sitemap siteContext =
sitemap :: SiteConfig -> Context String -> Rules ()
sitemap config siteContext =
match "sitemap.xml" do
route idRoute
compile do
context <- sitemapContext siteContext
context <- sitemapContext config siteContext
getResourceBody
>>= applyAsTemplate context
sitemapContext :: Context String -> Compiler (Context String)
sitemapContext siteContext = do
sitemapContext :: SiteConfig -> Context String -> Compiler (Context String)
sitemapContext config siteContext = do
pages <- concat <$> mapM (`loadExistingSnapshots` "_content") pagePatterns
posts <- recentFirst =<< loadPublishedPosts
posts <- recentFirst =<< loadPublishedPosts config
let context =
forItemField "updated" latestPostPatterns (\_ -> latestPostUpdated posts)
<> constField "pages" (itemListValue context (pages <> posts))

src/Green/Template/Custom/Context.hs (2 lines changed)

@@ -25,7 +25,7 @@ customContext config = self
self =
mconcat
[ forItemField "updated" latestPostPatterns \_ -> do
latestPosts <- lift $ recentFirst =<< loadPublishedPosts
latestPosts <- lift $ recentFirst =<< loadPublishedPosts config
latestPostUpdated latestPosts,
trimmedUrlField "url",
includeField "include" "",
