Øredev was a fantastic conference! I can't stress that enough! So why haven't I blogged about it? Well, there are already so many blog posts about it (google "øredev" and "blog") that I don't really know how to contribute more than repeating what's already been said. That's why.
Instead, I'm going to write a note on one thing that I've taken with me from a talk at Øredev: the Pomodoro Technique. It's essentially a technique that takes agile to the personal productivity level: working in small, timeboxed iterations (25 minutes), with short breaks (3-5 minutes) between iterations and longer breaks (15-30 minutes) after every 4 iterations in a row. Oh, by the way, an iteration is called a pomodoro, Italian for tomato. Why tomato? Because the inventor of the Pomodoro Technique, Francesco Cirillo, used an egg timer shaped like a tomato during the early development of the technique. Further, each day starts with planning and ends with collecting and visualizing the recorded data, ready to be analysed and retrospected.
So, the idea is very simple, but of course there's a lot more to it. What you should start with is reading Staffan Nöteberg's Pomodoro Technique in 5 minutes. Actually, when the videos from Øredev get published, you should start there: Staffan Nöteberg did an excellent talk on the Pomodoro Technique, sometimes using hats and dolls to illustrate his points.
There's also a quite large PDF by Francesco Cirillo available, but I haven't had time to read it yet.
I've just tried out the Pomodoro Technique myself for a couple of days now and some days have contained more pomodoros than others. Basically, I bought an egg timer for 25 SEK (around $3) and started with the fixed timebox part of the technique and logged the results. The second day I started to do some naïve estimation for each task. I also started with post-it notes for tasks, my personal pull system.
If you follow and read the links in this post, you'll see that I don't really do that much of the Pomodoro Technique! That's ok with me, I'm aware of that and that's why the title of this post is "PomodoroButtButtButt" (paraphrasing Jeff Sutherland's ScrumButt). I'm just getting used to the habit though, and making the human beings around me used to it as well. No need to be extreme here..
By the way, I'm still recording my workday in TimeSnapper, now TimeSnapper Professional. But that's a future blog post.
Monday, December 8, 2008
Thursday, November 13, 2008
TimeSnapper against MultiTasking
I have been working at Dotway for almost two weeks now. When your environment changes, there's a good opportunity to change habits as well. So, I started the habit of using TimeSnapper every morning. TimeSnapper is a tool that takes a screenshot every 5 seconds or so, and can "play" the images back as a movie. The movie is obviously played at a much higher frame rate, so a whole day takes approximately 5-10 minutes to play.
Essentially, you gain the ability to self-monitor - seeing yourself in the third person. Early and frequent feedback is a really good thing to have in most areas, like TDD for development or Scrum for projects. In my opinion, TimeSnapper gives the same kind of early and frequent feedback. So, every morning I play the movie of yesterday, write down the activities in time intervals, analyse my behavior, and ask myself the question "What should I do today that makes my morning analysis more joyful tomorrow?". Let me explain..
I don't like when I'm forced to write "08:00-11:00, XX:ed, YY:ed and ZZ:ed" - that's not really informative, i.e., how much time did I spend on XX compared to YY? But it's not the logging problem itself that I have a problem with; it's that I know multitasking is bad for productivity, but I still do it. Without even knowing it, it seems. Constant context switching is bad for productivity. So, I should focus on only one task at a time to make tomorrow morning a good start to the day.
Look, if you're using GTD (a nice time management methodology), but instead of doing something useful you read Lifehacker (a nice site/blog) every 10 minutes, something is utterly wrong! Not really getting things done, are you? Though, it can be hard to see for yourself. TimeSnapper lets you visualize your behaviour at the computer, putting your (potential) multitasking in an embarrassingly bright light.
Thanks to Scott Hanselman for making the tools list where I found TimeSnapper.
As a side note, I really like 43folders' new direction. Or, at least, this particular post.
Thursday, October 30, 2008
Turtle Graphics :: The big refactoring
Last time on "Turtle Graphics", we ended up with functions of type Turtle -> [Turtle]. The combine function had the type [Turtle] -> (Turtle -> [Turtle]) -> [Turtle]. Let's have some "fun"!
First, we add a helpful parameter to the turtle - penIsDown, i.e., the turtle is writing.
data Turtle = Turtle {
x :: Double,
y :: Double,
alpha :: Double, -- alpha = 0 means East
penIsDown :: Bool
}
deriving (Show)
We then add two useful functions for pen modification:
penDown t = let t' = t {penIsDown = True} in Logged {value = t', logs = [t']}
penUp t = let t' = t {penIsDown = False} in Logged {value = t', logs = [t']}
Second, we assume that it would be useful to split the functionality of returning a value and log, pretty much following separation of concerns. We could do this in a tuple, but I prefer having names on things:
data Logged l = Logged {
value :: l,
logs :: [l] } deriving (Show)
This has some implications on our code. All the "core" functions must now return both a turtle value and a singleton log. Oh, and by the way, the Command type changed as well.
type Command = Turtle -> Logged Turtle
go, left, right, penDown, penUp :: Command
go t = let t' = t {x = x t + step * cos (alpha t),
y = y t + step * sin (alpha t)
}
in Logged {value = t', logs = [t']}
left = rotate (pi/2)
right = rotate (-pi/2)
penDown t = let t' = t {penIsDown = True} in Logged {value = t', logs = [t']}
penUp t = let t' = t {penIsDown = False} in Logged {value = t', logs = [t']}
We must also change the function for combining functions:
(|>|) :: Logged Turtle -> Command -> Logged Turtle
logged |>| f = let logged' = f (value logged)
in Logged {value = value logged',
logs = logs logged' ++ logs logged}
As you see, there's a lot of duplicate code. Let's do an ExtractMethod (sort of):
logThis val = Logged {value = val, logs = [val]}
go, penDown, penUp :: Command
go t = logThis $ t {x = x t + step * cos (alpha t),
y = y t + step * sin (alpha t) }
penDown t = logThis $ t {penIsDown = True}
penUp t = logThis $ t {penIsDown = False}
rotate :: Double -> Command
rotate v t = logThis $ t {alpha = alpha t + v}
Still, we could be more generic in our logging type. That is, there is still a restriction on that the value returned and the log have the same type: the log is a list of the same type as the value has. We try to relax this restriction:
data Logged v l = Logged {
value :: v,
logs :: l
}
deriving (Show)
Maybe we did too much, logs isn't a list anymore. However, if it's really necessary, we'll find out through type inference. It's not obvious that we really need a list, just something we can append "stuff" to. Anyway, we get some compiler errors now:
`Logged Turtle' is not applied to enough type arguments Expected kind `?', but `Logged Turtle' has kind `k -> *' In the type synonym declaration for `Command'
We "fix" this by removing the Command type synonym and all references to it. I have a feeling that we'll need to fiddle some more with the types, so right now they're only in the way. If the types really are needed, we will find out (by a compiler error)! Though, it could be interesting to see what the type of e.g., "go" is:
*Main> :t go
go :: Turtle -> Logged Turtle [Turtle]
Ah, just what I expected, but was too lazy to write. ;) Let's check the combining function:
*Main> :t (|>|)
(|>|) :: Logged v [a] -> (v -> Logged v1 [a]) -> Logged v1 [a]
Hmm, this is rather weird! Before, the function was strongly bound to the Turtle type, which doesn't seem to be the case anymore. Moreover, we see that the input value type (v) doesn't need to be the same as the output value type (v1). How cool is that!?! It would be hard for me to look at the function and calculate the type myself, but Haskell just inferred the most generic type it could find. Coolness!
As always, a design pattern can be hard to spot, especially if you haven't spotted it before. Here's how Gregg Irwin puts it (from an Øredev presentation by Jimmy Nilsson):
1. You use it without being aware that you're using it
2. You hear about it, read up on it, and tinker a bit
3. You learn more and start using it explicitly, if naively
4. You get the fire and evangelize (optional)
5. Something "clicks"
6. You learn more and apply it "less naively" and more implicitly
7. Time passes and you see flaws
8. You question the concept (often because you misapplied it)
9. You either forget about it or add knowledge and experience
(Repeat steps 5-9 if necessary)
10. You use it without being aware that you are using it
Essentially, our Logger type is a monad. Or, actually, the type of |>| resembles >>=, the associative function that composes a particular monad. Since monads are important in Haskell, some syntactic sugar (the do-notation) has been added to make them easier to work with.
Let's try to instantiate the monad class:
instance Monad (Logged a b) where
We get a compiler error:
Kind mis-match
Expected kind `* -> *', but `Logged a b' has kind `*'
In the instance declaration for `Monad (Logged a b)'
We try to remove both type parameters:
instance Monad (Logged) where
..but still an error (yet, another one)
`Logged' is not applied to enough type arguments
Expected kind `* -> *', but `Logged' has kind `* -> * -> *'
In the instance declaration for `Monad (Logged)'
As you might see, we need to bind one of the types, whereas the other one needs to be "free". This puts us in a dilemma, since we know that we have both a type "v" and "v1". Thus, the type of the log must be bound (or, at least given a parametrized name):
data Logged l v = Logged { --notice the different order
value :: v,
logs :: l
}
deriving (Show)
instance Monad (Logged l) where
So Logged is missing one type parameter. It's kind of a function over types - one that takes a type and returns another type - just as the error message above implied. Though, we get an error again when we try to implement bind:
instance Monad (Logged l) where l >>= f = l |>| f
Couldn't match expected type `[a]' against inferred type `l' (a rigid variable)
`l' is bound by the instance declaration at writer5.hs:46:0
Expected type: Logged [a] v
Inferred type: Logged l a1
In the first argument of `(|>|)', namely `l'
In the expression: l |>| f
Now, we actually need to say that the log is a list. Maybe we can remove this requirement in a later blog post, but right now we go with the compiler.
data Logged l v = Logged {
value :: v,
logs :: [l]
}
deriving (Show)
Hey, it works! Or, at least it compiles. But in Haskell those tend to be synonymous ;) We specify "return":
instance Monad (Logged l)
where l >>= f = l |>| f
return val = Logged {value = val, logs = []}
Notice that we can still use |>| as usual:
*Main> start |>| go |>| go
Logged {value = Turtle {x = 2.0, y = 0.0, alpha = 0.0, penIsDown = True},
logs = [Turtle {x = 2.0, y = 0.0, alpha = 0.0, penIsDown = True},
Turtle {x = 1.0, y = 0.0, alpha = 0.0, penIsDown = True},
Turtle {x = 0.0, y = 0.0, alpha = 0.0, penIsDown = True}]}
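Since we now have a Monad instance, the same pipeline can also be written in do-notation, the syntactic sugar mentioned earlier. Here is a minimal, self-contained sketch repeating the post's definitions; note that the Functor and Applicative instances are my addition (modern GHC requires them before a Monad instance, which wasn't the case back then), and `start` is a hypothetical starting turtle at the origin:

```haskell
data Turtle = Turtle { x :: Double, y :: Double, alpha :: Double, penIsDown :: Bool }
  deriving (Show, Eq)

data Logged l v = Logged { value :: v, logs :: [l] }
  deriving (Show, Eq)

instance Functor (Logged l) where
  fmap f m = Logged { value = f (value m), logs = logs m }

instance Applicative (Logged l) where
  pure v    = Logged { value = v, logs = [] }
  mf <*> mv = Logged { value = value mf (value mv), logs = logs mv ++ logs mf }

instance Monad (Logged l) where
  m >>= f = let m' = f (value m)
            in Logged { value = value m', logs = logs m' ++ logs m }

step :: Double
step = 1

-- logThis records a value as both result and singleton log, as in the post.
logThis :: a -> Logged a a
logThis v = Logged { value = v, logs = [v] }

go :: Turtle -> Logged Turtle Turtle
go t = logThis $ t { x = x t + step * cos (alpha t)
                   , y = y t + step * sin (alpha t) }

start :: Logged Turtle Turtle
start = logThis (Turtle 0 0 0 True)

-- The pipeline start |>| go |>| go, written in do-notation:
path :: Logged Turtle Turtle
path = do t  <- start
          t' <- go t
          go t'
```

The log still comes out newest-first, exactly as in the |>| example above, because >>= prepends the older entries.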
So, why all this trouble? Well, it wasn't that hard! Essentially, all we did was make the logging a bit more separated and generic. Then we adjusted the types a bit to align with Haskell's monad class. The big win is that our Logger is now reusable if we want to log something other than turtles in the future. So, by adjusting towards a common pattern, we gained both syntactic sugar and reusability.
Of course, there are some ways to improve. I'll perhaps cover this in future posts. Oh, and by the way, our Logger monad is actually the Writer monad. Just thought you should know that.. ;)
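To make the Writer connection concrete, here is a sketch of `go` rewritten against the standard Writer monad (assuming the mtl library that ships with GHC). `tell` appends to the log; since Writer appends on the right, the log comes out oldest-first, the opposite order of our |>| version:

```haskell
import Control.Monad.Writer

data Turtle = Turtle { x :: Double, y :: Double, alpha :: Double, penIsDown :: Bool }
  deriving (Show, Eq)

stepLen :: Double
stepLen = 1

-- go against the standard Writer monad: tell records the new turtle state.
go :: Turtle -> Writer [Turtle] Turtle
go t = do
  let t' = t { x = x t + stepLen * cos (alpha t)
             , y = y t + stepLen * sin (alpha t) }
  tell [t']
  return t'

path :: Writer [Turtle] Turtle
path = go (Turtle 0 0 0 True) >>= go
```

`runWriter path` yields the final turtle together with the accumulated log, just like our Logged record did.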
Conclusion: Brian Beckman was right, we've invented monads by ourselves, maybe without thinking about it. Or?
Note: Since I'm not a master in category theory, I'm not sure if Logger actually was a "real" monad (strictly speaking), before we changed the order of the type parameters and removed a type parameter in the monad instantiation, making it have the right kind. Any ideas?
Tuesday, October 28, 2008
My first F#, Binary Chop
Yesterday, I really felt like trying out F#. To get some inspiration, I visited PragProg's katas and chose Kata Two -- Karate Chop. Or, actually, I just implemented a "functional" solution. I also tried to make an imperative pointer-based solution (which I might post later, when I have resolved a strange bug). Anyway, I had never coded F# before, so there might be a few places where I could have e.g. chosen a library function instead of implementing it myself. If you have any suggestions or general comments, please post them!
By the way, is there any way to program in literate F#?
#light
First, a few helper functions for triples:
let fst3 (a,b,c) = a
let snd3 (a,b,c) = b
let trd3 (a,b,c) = c
Define the middle index; for simplicity, return 0 if the list is empty:
let middleIndex xs = if List.is_empty xs then 0 else (List.length xs - 1)/2
Define a function that returns the middle element of a list, and two functions that return the first/last remaining halves. Note that there might not be a middle element, so we use the option type:
let firstHalf xs = Seq.take (middleIndex xs) xs
let middleElem xs =
    match xs with
    | [] -> None
    | xs -> Some (xs.Item (middleIndex xs))
let lastHalf xs =
    match xs with
    | [] -> Seq.empty
    | xs2 -> Seq.skip (middleIndex xs + 1) xs2
Let us group the functions in a convenient triple:
let splitInHalf xs = (firstHalf xs, middleElem xs, lastHalf xs)
Now, define a recursive function that 1) splits the list in three parts, 2) reuses some definitions, 3) returns "None" if the middle is empty, 4) otherwise tests the middle for equality, returning the current index if equal, 5) if not equal, chooses the appropriate half and recurses.
let rec exists x xs i =
    let triple = splitInHalf xs in //1
    let maybeMiddle = snd3 triple //2
    let firstPart = Seq.to_list (fst3 triple)
    let lastPart = Seq.to_list (trd3 triple)
    if Option.is_none maybeMiddle then None //3
    else let middle = Option.get maybeMiddle in //4
         if x.Equals middle then Some(i) //5
         elif middle > x then exists x firstPart (middleIndex firstPart)
         else exists x lastPart ((i+1) + middleIndex lastPart)
This is the function a user would call:
let public ex x xs = exists x xs (middleIndex xs)
Some tests, just to check, plus a helper function for equality over options:
//Tests
let eq x y = if Option.is_none x then Option.is_none y else x.Equals y
let res = [
ex 3 [];
ex 3 [1];
ex 1 [1];
ex 1 [1;3;5];
ex 3 [1;3;5];
ex 5 [1;3;5];
ex 0 [1;3;5];
ex 2 [1;3;5];
ex 4 [1;3;5];
ex 6 [1;3;5];
ex 1 [1;3;5;7];
ex 3 [1;3;5;7];
ex 5 [1;3;5;7];
ex 7 [1;3;5;7];
ex 0 [1;3;5;7];
ex 2 [1;3;5;7];
ex 4 [1;3;5;7];
ex 6 [1;3;5;7];
ex 8 [1;3;5;7];
]
let answers = [None; None; Some 0;
Some 0; Some 1; Some 2; None; None; None; None;
Some 0; Some 1; Some 2; Some 3; None; None; None; None; None
]
let asserts = Seq.for_all2 eq res answers
asserts
Now, I must say that I had a really good time developing this! Ok, some minor things didn't go as smoothly as I'd hoped, but it was actually the first time I tried F#. I've never had such a good experience with a language on the first day of use.
I will definitely be posting more F# posts in the future!
Cheers
Wednesday, October 22, 2008
Category Theory
Guess what landed on my hallway floor today: "Basic Category Theory for Computer Scientists". Thank you Mr Mailman!
There's a lot of "greek" in there! Hopefully, I'll decipher it (and understand it, of course).
Tuesday, October 21, 2008
CoachTV
In the middle of August, David Heinemeier Hansson twittered this:
Lars Pind is doing video coaching: http://coachtvblog.com/?p=3 -- good thoughts on probability and significance. 4:59 PM Aug 16th
Since that date, I have been following Lars Pind's fantastic video blog, CoachTV.
I think it's hard to analyze yourself from the outside. It reminds me of a quote by Richard Feynman, the 1965 Nobel prize winner in physics:
"The first principle is that you must not fool yourself—and you are the easiest person to fool."
It's hard to summarize what Pind's message is, so I'd rather not try. Instead, I want to re-post a comment that I made on one of his episodes. What triggered me to post the comment was that Pind talked about eating: the next time the viewer ate something, he/she should try to really look at the food, feel the texture, swallow slowly, feel the taste, etc. You get the picture? Obviously, the overall "taste experience" is not only about always eating good food; it is up to you whether you bother to enjoy it or not. Made me think about music:
Lars, you said that when we eat or drink something, we should try to feel the taste and texture more. That reminded me of the composer John Cage, who had the same opinion about sound. His most famous piece is “4′33″”, which is 4 minutes and 33 seconds of silence, written for piano. In summary, Cage had the opinion that there’s music everywhere, but it’s up to us to listen to it. So, even rush hour traffic can be music. Or when it’s so quiet that you can hear your own blood flow and pulse.
Could it be that the same reasoning goes for your other sensations and feelings as well? If not, why?
It’s up to ourselves to broaden our senses and perspectives, so that we can enjoy the music in our everyday life, also when sound is not involved per se.
Note: I actually wrote this post before watching episode #26 of CoachTV, where Lars asks the viewers to "tell our friends" about his show. Just thought you should know that.. :)
Monday, October 20, 2008
Podcast: Herding Code
There is a fairly new (May 2008) podcast that I have listened to a lot lately (including the old episodes): Herding Code. These episodes are particularly nice:
Enjoy :)
Sunday, October 19, 2008
TDD, reflection and the PropertiesEqual extension method
In my last post I wrote about the extension method ForEach, which is a very simple but useful method, at least when it comes to readability.
In this post I'll try to explain another extension method, which I'll call PropertiesEqual. Its purpose is to extend the object class with a method that compares the properties of two objects:
public static bool PropertiesEqual<T>(this T obj1, T obj2)
{
return true if all properties of obj1 equals all properties of obj2
}
You probably know of the object.Equals method, which by default compares object references. That is, two objects are equal only if they are exactly the same object. If you really want to compare the content of two objects, you need to override this method in your class and compare the properties manually.
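To see that default behaviour concretely, here is a small demo (the `Box` class is hypothetical, just for illustration):

```csharp
using System;

// Box has no Equals override, so the default reference comparison applies.
public class Box
{
    public int V { get; set; }
}

public static class EqualsDemo
{
    public static void Main()
    {
        var a = new Box { V = 1 };
        var b = new Box { V = 1 };
        Console.WriteLine(a.Equals(b)); // False: same content, different references
        Console.WriteLine(a.Equals(a)); // True: the very same object
    }
}
```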
When I designed this function, I started with two simple test cases:
1. If two objects of the same class have the same public properties, yield true
2. If two objects of the same class have different public properties, yield false
Translated to code, this becomes:
class TestClass
{
public int A { get; set; }
public int B { get; set; }
}
[TestMethod()]
public void PropertiesEqualTest()
{
Assert.IsTrue(new TestClass { A = 1, B = 1 }.PropertiesEqual(new TestClass { A = 1, B = 1 }));
Assert.IsFalse(new TestClass { A = 1, B = 1 }.PropertiesEqual(new TestClass { A = 1, B = 0 }));
}
The implementation is very straightforward:
public static bool PropertiesEqual<T>(this T obj1, T obj2)
{
return typeof(T).GetProperties().All(property =>
{
var prop1 = property.GetValue(obj1, null);
var prop2 = property.GetValue(obj2, null);
return Equals(prop1, prop2); // static object.Equals handles nulls safely
});
}
typeof(T).GetProperties() will return all public properties of T, and property.GetValue(obj, null) will return the value of a given property. The All extension method returns true only if all elements of the sequence satisfy the given condition, i.e. all properties are equal.
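The reflection calls above can be tried out in isolation. Here is a small sketch (the `Point` class is hypothetical) showing what GetProperties and GetValue return:

```csharp
using System;
using System.Linq;
using System.Reflection;

// A hypothetical class with two public properties to reflect over.
public class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}

public static class ReflectionDemo
{
    public static void Main()
    {
        var p = new Point { X = 1, Y = 2 };
        // GetProperties() returns the public properties of the type;
        // GetValue(obj, null) reads a property's value from an instance.
        foreach (PropertyInfo property in typeof(Point).GetProperties())
        {
            Console.WriteLine($"{property.Name} = {property.GetValue(p, null)}");
        }
    }
}
```

Note that the runtime does not guarantee the order in which GetProperties returns the properties, which is fine here since All checks every one of them anyway.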
The test cases will pass fine, and we now have a simple way of comparing properties of two objects! In the next post I'll try to describe how to extend the method with support for recursive properties (compare properties of a property), and in the one after that I'll write about how to implement an IEqualityComparer based on this method.
EDIT: Note, these series aren't much of the type "here is a revolutionary new technique", but more of "here is how I would write the code for this". My focus is to get you to understand how I think when I design methods, not to tell you that "this is the way"!
Code Kata : Monopoly
Yesterday evening, two friends and I arranged a highly spontaneous and unofficial code kata at my friend's apartment. The task was to develop a Monopoly game using TDD. It was sort of an experiment as well: we wanted to see how TDD could help us discover design, rather than invent it.
We started by talking about the domain, listing some words that we thought were important to the game. Then we made a small domain model diagram, just with boxes and lines (the relations had no directions or multiplicities). This was fun! It felt like we were back in school again.. :)
After that, we started to make some user stories, each on a small piece of paper. Here are the stories we came up with (in prioritized order):
- A player walks the number of steps the dices show.
- In the beginning of the game, the ordering of player moves is determined.
- Players act in the predetermined order (was later removed, redundant with previous).
- A player hits or passes "Go" and earns 4000.
- A player buys the street he/she is on.
An interesting detail is that we kind of got stuck on the "the dice show" part of the first story. It obviously has something to do with random number(s), but how should we test that? That is, we thought that only players would need dice, but if a player uses the Random() system method, then we must capture the result in public state in order to know whether the player actually walked the number of steps that the dice show. Not nice! We felt the urge to really talk about this, to see if logical arguments could lead us to a good, and hopefully pragmatic, solution to this problem. We ended up with dependency injecting an "IDice" - something that could give us a random number between two and twelve - and mocking the IDice in the test. Nice! Now we could manipulate the player through fake dice, without having code smells all over the place!
Though, in retrospect, it would have been nice to have a TDD expert in the room. Ideally, you want yourself (or the group) to ask good questions and answer them logically, but without experience, asking the right questions at the right time is hard. A teacher behind your back, mentoring and supporting, would be nice to have. It reminds me a little of Polya's "How to Solve It". Still, doing the exercise without a "master" was probably a good idea, in a way: it made us (more) convinced about what we were doing, and if we weren't convinced, we had to talk about it.
We did the exercise for about six hours, but we had dinner and wine during that time as well, and perhaps we weren't that effective all the time :) Anyway, we implemented all the stories except the last one (a half-baked story).
Thank you for a very nice evening!
Thursday, October 16, 2008
Turtle Graphics :: Changing Interface
To recall, my first idea with the Turtle Graphics posts was to see whether it is likely that a beginner in functional programming would have invented monads by himself/herself.
Let's change the interface a bit, such that a command is a function from log to log.
(I will reuse definitions from the original post, i.e. turtle definition and step)
> type Command = [Turtle] -> [Turtle]
> start = Turtle { x=0, y=0, alpha=0 }
> go, left, right :: Command
> go [] = []
> go (t:ts) = t {x = x t + step * cos (alpha t),
>                y = y t + step * sin (alpha t),
>                alpha = alpha t} : t : ts
> rotate :: Double -> Command
> rotate v [] = []
> rotate v (t:ts) = t {alpha = alpha t + v} : t : ts
> left = rotate (pi/2)
> right = rotate (-pi/2)
> execute :: [Command] -> Turtle -> [Turtle]
> execute cs l = (foldr (.) id cs) [l]
Basically, the primitive commands have to work on the whole list and consider the case of an empty list, whereas commands defined in terms of other commands don't have that restriction. It's also easy to define new commands:
> go3 = go . go . go
The primitive commands don't look good at all! But it's an easy fix..
The problem is of course with the type of Command. A primitive command only cares about the first turtle in the list (the newest one), so why should a command take a whole list as parameter?
> type Command = Turtle -> [Turtle]
> go, left, right :: Command
> go t = return $ t {x = x t + step * cos (alpha t),
>                    y = y t + step * sin (alpha t),
>                    alpha = alpha t}
> rotate :: Double -> Command
> rotate v t = return $ t {alpha = alpha t + v}
> left = rotate (pi/2)
> right = rotate (-pi/2)
> -- [Turtle] -> (Turtle -> [Turtle]) -> [Turtle]
> (|>|) :: [Turtle] -> Command -> [Turtle]
> [] |>| f = []
> (l:ls) |>| f = (f l) ++ (l:ls)
So, for go and rotate, we use a little trick: using return instead of putting the result in a list. What is much more interesting is that we now have a little asymmetry in the type of a command. It is no longer possible to use (.) for combining commands, so we define |>| to do that for us.
We must use a little lambda to define new commands..
> go3 = \t -> t |>| go |>| go |>| go
..but that shouldn't be too much of a problem.
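To see that the pieces fit together, here is a quick standalone sanity check (my own sketch, not from the original post, with the definitions above squeezed into one file): each primitive pushes exactly one new turtle onto the log, newest first.

```haskell
-- Standalone sketch of the second interface (Turtle -> [Turtle]).
data Turtle = Turtle { x :: Double, y :: Double, alpha :: Double }
  deriving (Show, Eq)

step :: Double
step = 1

type Command = Turtle -> [Turtle]

go :: Command
go t = [ t { x = x t + step * cos (alpha t)
           , y = y t + step * sin (alpha t) } ]

-- run a command on the newest turtle, prepending its log
(|>|) :: [Turtle] -> Command -> [Turtle]
[]     |>| _ = []
(l:ls) |>| f = f l ++ (l:ls)

main :: IO ()
main = do
  let log' = [Turtle 0 0 0] |>| go |>| go |>| go
  print (length log')   -- 4: the start position plus one entry per step
  print (x (head log')) -- 3.0: the newest position comes first
```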
We're not done yet, but at least we took care of the little logging problem we had in the previous post.
The ForEach extension method (C#)
Everyone knows about the foreach keyword, but have you ever noticed that there is no equivalent extension method for IEnumerables? I.e., execute "this action" on each element. It is perhaps the simplest extension method of them all, which is the reason I'll share it with you :)
public static class MyExtensionMethods
{
public static void ForEach<T>(this IEnumerable<T> list, Action<T> action)
{
foreach(var item in list) action(item);
}
}
For those of you who aren't familiar with extension methods, what this really says is that ForEach is a member method of IEnumerable<T>, and can be called by simply writing
myList.ForEach(myAction)
For example, to output each item of a list to the console, you could write:
myList.ForEach(x => Console.Write(x));
instead of
foreach(var x in myList) Console.Write(x);
In this case you might not gain so much in terms of code size, but I think it makes the code more readable. You are reading text from left to right, so it makes sense having the for each statement to the right, right? :)
myList.WhereThis().SelectThat().DoThis();
(There is probably some kind of cool name for this pattern, like "The opposite of law of demeter"... Gustaf probably has more knowledge in this!)
Here is a more complex example of a combination of extension methods:
myList.Where(x => x.IsValid()).Select(x => x.ComputeValue()).Where(x => x > 0).ForEach(x => Console.Write(x));
Where and Select are also extension methods of IEnumerable. Without these you would have to write:
foreach(var x in myList)
{
if (x.IsValid())
{
var value = x.ComputeValue();
if (value > 0) Console.Write(value);
}
}
8 lines instead of 1! Impressed? :)
So, extension methods! Learn them, use them, and write your own!
Wednesday, October 15, 2008
Some thoughts on Turtle Graphics Basics
In the comments of the last turtle post, Joel pointed out a way to make commands composable. I didn't quite get it to work, probably because Command is a type synonym and not a concrete type. The first time I saw the proposed solution, to combine commands with <|>, I got a little afraid. I'll explain why..
<Update>
Joel pointed out (in a mail) that Haskell98 doesn't permit instances of type synonyms or lists of types. If the flag "fglasgow-exts" is used, then the problem is solved.
</Update>
The biggest reason I got afraid was that it is still up to the consumer of the code to combine commands in a "good" way, i.e., to log intermediate positions. It would be perfectly legal to combine commands with (.) instead of (<|>), making the API quite risky. I now think a bit differently..
It would be possible to kind of wrap the commands into a data structure of their own (as the Command pattern in OO):
> data C = C { unC :: Command }
In some way, this is very type-safe, since it will be cumbersome to un-wrap the commands to combine them in a "bad" way..
> go2 = C ((unC go) . (unC go))
..thus, users are forced to use our special operator for combining commands. But there's something I don't like about this. We're kind of forcing the users into a data structure, just to restrict them. I think we lose a lot of nice functions (e.g. from the Prelude) this way. Of course, we could define functions that make it easier for the user, but the user still has to learn those functions. What I'm trying to say is that it is sometimes bad to invent a whole new API style from scratch, when, for example, a monadic-style API has the benefit that more people know about it and that we can "piggyback" on all the tutorials on monads. Or maybe it's just me being too focused on monads at the moment. ;)
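For what it's worth, here is a minimal standalone sketch (my own, with hypothetical names like prim and <.>, not from the post) of the wrapper idea: a newtype around commands plus one blessed combinator that both composes and logs, so that (.) simply doesn't type-check on wrapped commands.

```haskell
data Turtle = Turtle { x :: Double, y :: Double, alpha :: Double }
  deriving (Show, Eq)

-- the plain command type from the original post
type Command = Turtle -> Turtle

-- wrapped commands carry their own log, newest position first
newtype C = C { unC :: Turtle -> [Turtle] }

-- lift a primitive into the wrapper (hypothetical helper)
prim :: Command -> C
prim f = C (\t -> [f t])

-- the one blessed combinator: run the left command, then the right
-- one from the newest position, concatenating the logs
(<.>) :: C -> C -> C
C f <.> C g = C (\t -> case f t of
                         []        -> []
                         ls@(l:_)  -> g l ++ ls)

go :: Command
go t = t { x = x t + 1 }

main :: IO ()
main = do
  let logged = unC (prim go <.> prim go) (Turtle 0 0 0)
  print (length logged)   -- 2: one log entry per primitive
  print (x (head logged)) -- 2.0
```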
I guess this could be a nice discussion? What do you think?
Right now, my biggest reason to be afraid of Joel's solution is that it uses type classes, which is a fairly exotic feature compared to other functional languages. Type classes are obviously nice, but it is hard to see how a "nice" Turtle Graphics API could be ported to, e.g., F#, if we based the API on type classes.
Next: Changing interface on primitive commands
Tuesday, October 14, 2008
Joining Dotway!
Last Friday I joined Dotway! Dotway is a .NET consulting company and was founded in Malmö, but I will be working in the Gothenburg office.
Dotway is a Microsoft Gold partner (achieved through competencies) and offers services such as software development, design support, and training/mentoring. Dotway also practices agile and TDD, and "Dotway consultants are Certified ScrumMasters".
I will start working in the beginning of November. Very exciting!
Some links:
Dotway
Mikael Freidlitz
Johan Normén
Magnus Mårtensson
Øredev (a conference and a sister company)
Monday, October 13, 2008
Turtle Graphics Basics
Some weeks ago, I watched Don't fear the Monads, again. Beckman said something like "there's nothing about monads that you don't already know" and "after some time, you'd be inventing monads yourself, but you probably wouldn't know that they were monads". Beckman inspired me to start a small (!) project of Turtle Graphics blog posts.
The idea is to write a very small piece of Haskell code in each post, where every post is a Turtle Graphics implementation. Or, not an implementation, but rather just a model, so no real graphics. You can see them as small Turtle Graphics APIs that focus entirely on the data and not at all on presenting the data.
Now, not every API is an internal DSL, but for these posts I'll try to put on my language sun-glasses..
First we need a definition of a Turtle:
> data Turtle = Turtle {
>     x :: Double,
>     y :: Double,
>     alpha :: Double -- alpha = 0 means East
>   }
>   deriving (Show)
The first idea (the trivial/naïve implementation) is to define functions with the type signature Turtle -> Turtle. The functions I have in mind are go, left, right, and rotate, hopefully all self-explanatory. Let's do a type synonym, because it's fun:
> type Command = Turtle -> Turtle
Here are the commands, very basic:
> go, left, right :: Command
> go t = t { x = x t + step * cos (alpha t),
>            y = y t + step * sin (alpha t),
>            alpha = alpha t }
> left t = t { alpha = alpha t + pi/2 }
> right t = t { alpha = alpha t - pi/2 }
> rotate :: Double -> Command
> rotate v t = t { alpha = alpha t + v }
> step = 1
Note that rotate has a different type. Also note the order of the arguments: we can use currying, so that if we give rotate just one argument (an angle), we will get a Command.
Let us just see what the user code looks like:
*Main> let t = Turtle {x = 1, y = 1, alpha = 0}
*Main> t
Turtle {x = 1.0, y = 1.0, alpha = 0.0}
*Main> go t
Turtle {x = 2.0, y = 1.0, alpha = 0.0}
We can combine functions with the operator (.)
*Main> (go . go) t
Turtle {x = 3.0, y = 1.0, alpha = 0.0}
And, of course, we can define new functions in terms of the basic ones:
*Main> let go2 = go . go
*Main> go2 t
Turtle {x = 3.0, y = 1.0, alpha = 0.0}
The presentation is a bit rough, especially for non-zero angles..
*Main> right t
Turtle {x = 1.0, y = 1.0, alpha = -1.5707963267948966}
..but I assume you can calculate from radians to degrees in your head.. ;)
As you might have seen, the ordering of arguments for rotate was important:
*Main> (go . (rotate pi) . go) t
Turtle {x = 1.0, y = 1.0000000000000002, alpha = 3.141592653589793}
If the ordering of arguments were different, the above would not have been possible to write. Instead we would have been forced to write a lambda-expression, which is maybe not the best example of a good user code (if you can avoid it, at least).
Note that we also can remove the definitions of left and right, and replace them with
> left' = rotate (pi/2)
> right' = rotate (-pi/2)
Now, a nice feature would be to let a user give a list of Commands and a Turtle to a function, returning the final position:
> runR :: [Command] -> Turtle -> Turtle
> runR [] t = t
> runR (c:cs) t = runR cs (c t)
Here's where I think it can be useful to know something about functions in general. For example, functions of type a -> a form a monoid [1,2] together with the operator (.) and the identity id. This allows us to write runR in a more readable form (now called run):
> run :: [Command] -> Turtle -> Turtle
> run cs t = (foldr (.) id cs) t
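As a standalone sanity check (my own sketch, with the definitions above compressed into one file): note that foldr (.) id [c1, c2] builds c1 . c2, so the last command in the list is applied first.

```haskell
data Turtle = Turtle { x :: Double, y :: Double, alpha :: Double }
  deriving (Show, Eq)

step :: Double
step = 1

type Command = Turtle -> Turtle

go :: Command
go t = t { x = x t + step * cos (alpha t)
         , y = y t + step * sin (alpha t) }

rotate :: Double -> Command
rotate v t = t { alpha = alpha t + v }

run :: [Command] -> Turtle -> Turtle
run cs t = foldr (.) id cs t

main :: IO ()
main = do
  let t = Turtle 1 1 0
  print (run [go, go] t)  -- x = 3.0
  -- foldr (.) id [go, rotate pi] == go . rotate pi: rotate runs first
  print (run [go, rotate pi] t == (go . rotate pi) t)  -- True
```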
A monoid is a nice thing to be, but what is really the purpose of the run function? Isn't the whole goal of a "Turtle Graphics" that we compute a list of positions, so that we can plot them somewhere? It's no use just calculating a final position: the picture will become pretty boring eventually.. ;)
What we need is a function that takes a list of commands and a start position, returning a list of positions:
> runHistory :: [Command] -> Turtle -> [Turtle]
We make runHistory a wrapper function around runToList:
> runHistory cs t = runToList cs [t]
> runToList :: [Command] -> [Turtle] -> [Turtle]
> runToList [] ts = ts
> runToList _ [] = []
> runToList (c:cs) (t:ts) = runToList cs ((c t) : t : ts)
<Update>
Joel pointed out (in the comments) that runHistory could be written as:
> runHistory = flip (scanr ($)) . reverse
It's of course a lot better than my definition(s), from a readable-code perspective. I imagined that my version was more performant (since I do not reverse), but as we all know, performance only matters once you notice a problem (as I pointed out in the comments myself). So, I fell into the trap of premature optimization.. Oh my, oh my.
</Update>
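Out of curiosity, here is a quick standalone check (my own sketch) that Joel's one-liner and the explicit recursion agree:

```haskell
data Turtle = Turtle { x :: Double, y :: Double, alpha :: Double }
  deriving (Show, Eq)

type Command = Turtle -> Turtle

go :: Command
go t = t { x = x t + cos (alpha t), y = y t + sin (alpha t) }

rotate :: Double -> Command
rotate v t = t { alpha = alpha t + v }

-- the recursive version from the post
runToList :: [Command] -> [Turtle] -> [Turtle]
runToList [] ts = ts
runToList _  [] = []
runToList (c:cs) (t:ts) = runToList cs (c t : t : ts)

runHistory :: [Command] -> Turtle -> [Turtle]
runHistory cs t = runToList cs [t]

-- Joel's one-liner from the update
runHistory' :: [Command] -> Turtle -> [Turtle]
runHistory' = flip (scanr ($)) . reverse

main :: IO ()
main = do
  let t  = Turtle 1 1 0
      cs = [go, rotate pi, go, go]
  print (runHistory cs t == runHistory' cs t) -- True
```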
Let's try it out:
*Main> runHistory [go,right,rotate pi, go,go] t
[Turtle {x = 2.0, y = 3.0, alpha = 1.5707963267948966},Turtle {x = 2.0, y = 2.0, alpha = 1.5707963267948966},Turtle {x = 2.0, y = 1.0, alpha = 1.5707963267948966},Turtle {x = 2.0, y = 1.0, alpha = -1.5707963267948966},Turtle {x = 2.0, y = 1.0, alpha = 0.0},Turtle {x = 1.0, y = 1.0, alpha = 0.0}]
Hmm, the presentation gets really ugly. Perhaps an implementation of Show would be nice, but that's out of scope for this post.
I think there's some smelly code here. We have "bolted on" a history feature on top of the basic commands, so the only knowledge of a history (or log) is in runHistory itself. From a logging perspective, the basic functions do not compose at all! Let me show you what I mean..
Suppose we define a go2 in terms of go (see above). Clearly, we would like go2 to write to the log twice. With the current implementation, this is impossible! There's no way that runHistory can write more than one time to the log, per command. Well, it could be done, but it's not pretty:
*Main> let go2 = [go,go]
(*Main> let prog = [go,go2,go] -- type error)
*Main> let prog = [[go],go2,[go]]
*Main> runHistory (concat prog) t
Even uglier:
*Main> let go3 = [go2,[go]]
(*Main> let prog = [[go],go3,[go]] -- type error)
*Main> let prog = [[[go]],go3,[[go]]]
*Main> runHistory (concat (concat prog)) t
From this, it's obvious that we need to let the basic functions be aware of history/logging. That would be the goal of future posts.
Comment/spoiler: Kenn Knowles has also written a blog post on a Turtle Graphics implementation. Though, the aim of my post(s) is to start out real basic and then evolve the code through several posts. Basically, Kenn's implementation is a "state-over-writer" monad transformer, with a self-implemented difference list as the writer monoid.
Wednesday, October 8, 2008
Better DI
I was watching "InfoQ: Mock Roles Not Object States" with Nat Pryce and Steve Freeman yesterday. I had read the paper before watching the video, and I pretty much expected the same content in a different form. Boy, was I wrong..
In particular, they talked a few minutes on an argument against Dependency Injection (DI), namely that the constructor gets lots and lots of arguments. I used DI the first time a couple of weeks ago, so I'm pretty new to it, and I wasn't sure I was doing exactly the right thing. And, yes, the constructors at some places became huge.
What the InfoQ presentation taught me was that not all interfaces a class uses are dependencies per se. Classes can, for example, be equipped with default parts or policies that can be changed after creation. The important thing is that the instance will work anyway, which is not the case if an instance is missing a dependency.
This nice picture (screenshot from their slides) made it click for me:
Big thanks to Johan Normén who mentioned the paper on the SweNug meeting two weeks ago!
Tuesday, September 23, 2008
A thought on F#
A while ago I read this blog post by Phil Wadler. The post is a comment on an article from an issue of the Journal of Functional Programming, in which Yaron Minsky and Stephen Weeks from Jane Street Capital talk about their experiences with functional programming on Wall Street. Wadler quotes from the article:
When we first tried switching over from VB to C#, one of the most disturbing features of the language for the partners who read the code was inheritance. They found it difficult to figure out which implementation of a given method was being invoked from a given call point, and therefore, difficult to reason about the code. It is worth mentioning that OCaml actually does support inheritance as part of its object system. That said, objects are an obscure part of the language, and inheritance even more so. At Jane Street, we almost never use objects and never use inheritance. We use standard functional programming techniques and code reviewers find that style more comprehensible. In particular, they can reason by following static properties of the code (module boundaries and functor applications) rather than dynamic properties (what class an object is).
This made me think of F#, which is inspired by OCaml. Will users of F# tend not to use any OO, just as at Jane Street Capital? Or, put another way, will F# users tend to only be consumers of OO code (i.e., the .NET framework) and provide a "nice" OO-style API for client code, but "underneath the surface" focus entirely on the functional part of F#?
Media: Wadler does a funny thing, in the very last minute of his talk (Faith, Evolution, and Programming Languages).
Attending events
Tomorrow I will attend SweNug Gbg for the first time. Exciting! I had a real good time at the open space (in Gothenburg) one month ago and I'm hoping that tomorrow will be fun as well. But it's not without sacrifices: I'll have to catch the earliest train to Ystad (@ 05.40) to attend a driving lesson on Thursday.
On 3 October, I will attend Microsoft's half-day on data access in Malmö, where Erik Meijer and Jimmy Nilsson will talk. Erik Meijer is one of the (many) reasons I chose to focus on .NET rather than Java. After you read this post, go and read all his papers! :)
A funny quote from the event page:
[...] in the first half of the last century, mathematicians invented monads, which have subsequently migrated to the computer science mainstream via functional programming (mainly Haskell) and have recently shown up as LINQ in C# 3.0 and Visual Basic.
Jimmy Nilsson is the author of Applying Domain-Driven Design and Patterns, which I read this summer - great book! And, by the way, Jimmy is the second best developer in Sweden (rated by ComputerSweden).
Finally, a note-to-self: try to write shorter posts and more often.
Saturday, September 6, 2008
ClickOnce deployment or not?
Some weeks ago, I developed a small program in C# called P4 Explorer. To distribute it, I used a technique called ClickOnce deployment, which is a part of Visual Studio C# Express 2008. This blog post will be about what ClickOnce deployments are, how they can be used, and why, in the end, you shouldn't use them...
ClickOnce
ClickOnce deployments will simplify your deployment a lot. To deploy your application, all you need to do is:
- specify the location to publish the application (i.e. a network drive, web site or ftp),
- specify from what location the users will install it (probably the same as 1), and
- decide whether the application should be installed to the start menu, or must always be started from the install path (2).
Now, for you to deploy a new version, all you do is click "Publish". When the user then starts the program, the update will be found and installed (if permitted by the user). Very easy!
Application Path
ClickOnce deployment doesn't come for free, though. First off, the application won't be installed under "C:\Program Files\MyCompany\MySoftware" as you might expect, but in some secret place (hidden from both the user and the developer). If you decide to let the installer add a shortcut to your start menu, there is no way to find out where this shortcut points, as it's not a normal shortcut. Right-clicking on it and selecting Properties will not give you the information you expect, i.e. the path to the executable.
Now, there is a simple way of finding out the path of the executable. When the installed application starts, the property Application.ExecutablePath will be correctly set, and it can thus be used to, for example, add the application folder to the PATH environment variable:
void UpgradePathVariable()
{
    // Application.ExecutablePath points at the real (hidden) install folder.
    string l_ApplicationFolder = Path.GetDirectoryName(Application.ExecutablePath);
    string l_PathVariable = Environment.GetEnvironmentVariable("PATH", EnvironmentVariableTarget.Machine);
    if (!l_PathVariable.Contains(l_ApplicationFolder))
    {
        // Only append, and write back, when the folder is missing.
        l_PathVariable = l_PathVariable + ";" + l_ApplicationFolder;
        Environment.SetEnvironmentVariable("PATH", l_PathVariable, EnvironmentVariableTarget.Machine);
    }
}
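The same idempotent check can be sketched in Python for illustration (the function name is made up; Python's os.pathsep picks the right separator per platform, ";" on Windows):

```python
import os

# Hypothetical sketch of the PATH update above: append a folder only
# when it is not already one of the variable's entries.
def add_to_path(path_var: str, folder: str) -> str:
    entries = path_var.split(os.pathsep)
    if folder in entries:
        return path_var  # already present: leave the variable untouched
    return path_var + os.pathsep + folder
```

Calling it twice with the same folder leaves the value unchanged, which is exactly why the Contains check matters before writing the machine-wide variable back.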
Parameters
Also, because the program doesn't have a fixed installation path, there is no support for passing parameters to the program, i.e. args in static void Main(string[] args) will always be empty. The documentation on MSDN states that the only way to send parameters to a ClickOnce application is to deploy it on a web page, and let the user pass query-string parameters to it, i.e.:

http://www.yourdomain.com/yourapplication.exe?parameter1=val1&parameter2=val2
But
- this requires your application to be deployed using a web page, and
- you must download the installer every time you need to send parameters to the program.
A common scenario is registering your program in the registry as an "Open with" handler for a file type, where the shell invokes:

pathtoyourexe "%1"

Doing this through the recommended http style above would be both slow and hacky. Instead, by using the Application.ExecutablePath property, you can get the physical path to the installed executable and add that path to the registry. With this set up correctly, right-clicking on a file and selecting your program to open it will launch the program and pass the parameter correctly!
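For reference, such an "Open with" registration typically boils down to a registry value along these lines (a hedged sketch only; the install path shown is made up, and with ClickOnce you would have to fill it in from Application.ExecutablePath at runtime):

```
[HKEY_CLASSES_ROOT\Applications\yourapp.exe\shell\open\command]
@="\"C:\\SomeInstallFolder\\yourapp.exe\" \"%1\""
```

The shell replaces %1 with the path of the right-clicked file, which then shows up in args in Main.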
Settings
But even though everything might seem to work out fine, you will soon notice that the Settings object defined in your application differs depending on whether you start the program from the start-menu shortcut or by right-clicking a file: the two entry points do not share the same application settings. And if you upgrade the program, all settings in your "right-clicked version" will be gone, because the new version is installed in a new folder.
In the end
Even though ClickOnce applications are really easy to deploy, and give you a lot for free, the limitations just make them really hard to work with. For simple applications it might work, but as soon as you want the user to interact with your program through the Explorer shell, you are better off making your own installer (or using a free one such as NSIS). So in the end I decided to deploy my program manually using source control, which made all my problems go away. Sending parameters worked, settings worked, and the application had a fixed folder!
The only thing I really missed from ClickOnce was the ability for the program to automatically check for updates, which was something I really needed to make sure everyone using my application had the latest version (hopefully with fewer bugs in it). But it turned out this wasn't so hard to build on my own! The pseudo code for it, printed here, is divided between two programs: your main application, yourapp.exe, and the upgrade application, upgrade.exe.
yourapp.exe:
- Is there a new version available? (i.e. check using source control)
- Copy upgrade.exe (and its dependencies) to a new folder "temp"
- Start new process temp\upgrade.exe
- Close program

upgrade.exe:
- Make sure all instances of yourapp.exe are closed
- Download the latest version (i.e. using source control)
- Start new process ..\yourapp.exe
- Close program
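As a language-neutral sketch of the decision yourapp.exe makes at startup (Python, purely illustrative; the version check is faked with strings, where the real thing would ask the source control server):

```python
# Helper names are hypothetical; a real implementation would copy files
# with shutil and spawn upgrade.exe with subprocess.Popen.
def plan_upgrade(local_version: str, remote_version: str) -> list:
    if remote_version == local_version:
        return ["keep running"]  # nothing to do
    return [
        "copy upgrade.exe (and dependencies) to temp",  # the copy can overwrite the originals
        r"start process temp\upgrade.exe",
        "close yourapp.exe",  # files are now unlocked for upgrade.exe
    ]

print(plan_upgrade("1.0", "1.1"))
```

The point of the temp copy is that a running executable cannot overwrite itself, so the upgrader must run from a different folder than the files it replaces.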
So: ClickOnce, good for small applications, not so good for more advanced ones.
Friday, August 29, 2008
Podcasts
I admit it: I'm a podcast junkie! When running, commuting, walking, whatever: my iPod Nano (2nd generation) is with me practically everywhere.
Here's some podcasts that I follow. Most of them are followed by recommended episodes. Enjoy!
Hanselminutes
- What is Done? - A Conversation with Scrum Co-Creator Ken Schwaber
- Lean Software Development with Tom and Mary Poppendieck
- Quetzal Bradley on Testing after Unit Tests
Software Engineering Radio
- Simon Peyton Jones on Functional Programming and Haskell
- Retrospectives with Linda Rising (This almost made me cry, seriously!)
- The New Guardian.co.uk website with Matt Wall and Erik Doernenburg
Deep Fried Bytes
- Talking Domain-Driven Design with David Laribee - Part 1
- Talking Domain-Driven Design with David Laribee - Part 2
Pragmatic Podcasts
- Andy Hunt on Pragmatic Wetware
- Dave Thomas on Pragmatic Publishing
ThoughtWorks - IT Matters Podcast
- Domain Specific Languages - Part 1 of 2 (no permalink)
- Domain Specific Languages - Part 2 of 2 (no permalink)
Alt.NET podcast
- (Listen to all episodes..)
Lean Agile Straight Talk
- Overcoming Impediments to Test-Driven Development
- Test-Driven Development and Design Patterns
MSDN Radio (In Swedish. The homepage is not up to date. Dag König's blog links to more episodes.)
- Magnus Mårtensson, Dotway
Agile Toolkit Podcast
- Ruby Dave
- Smalltalk Dave
.NET Rocks!
- Jon Harrop Makes Us F#
- James Kovacs Inverts our Control
- XML Literals Panel from TechEd 2008
OnSoftware (Video, so this I watch at the computer)
- F#
Also, Channel 9 has great video content!
- Brian Beckman: Don't Fear the Monads
- Anders Hejlsberg, Herb Sutter, Erik Meijer, Brian Beckman: Software Composability and the Future of Programming Languages
- Brian Beckman: Monads, Monoids, and Mort
Friday, August 8, 2008
Google Calendar
I love Google Calendar, mostly because of the free SMS service, which sends you a text message x minutes before an event. What's the point of having a calendar if you only get notifications from it when sitting in front of a computer, which is how MS Exchange and similar systems work? "You need to be at the dentist in 30 minutes": well, hopefully I'm sitting in front of the computer when that message pops up. Of course, you can always install a mail service on your mobile and receive a mail when an appointment is about to happen, or you can buy an expensive mobile phone with its own calendar, which you can then sync with the computer from time to time. But I just want to say "at that time, I need to be there, and wherever I am 30 minutes before that, notify me". That's why I love Google Calendar: you can be anywhere and still get a notification!

Now, that's all glorious and so on, but as always there are some problems. I want to be able to sync it with other calendars, so that I don't need to open up a browser, enter the address, and log in every time I need to add an event. For Mozilla Thunderbird there is a calendar add-on called Lightning, and for that add-on there is another one called Provider for Google Calendar, which makes it possible to view and update the Google calendar from within Thunderbird. Niiice!

But hey, wait a minute: the reason I wanted to use Google Calendar in the first place was its SMS service, and as it turns out, the Google Calendar provider for Thunderbird doesn't support this :( So in the end I still need to access Google Calendar from within a web browser, and change the settings of all my events manually.
So that's where I'm now, still looking for the perfect solution. Google calendar is still the best option for me, but if you know about a better one then please let me know...
Monday, June 2, 2008
Hacking VS C# 2008 Express
VS C# 2008 Express is a really nice product in many ways - you've got most of the stuff you need, but most of all, it's free!
Though, I was playing around with it a bit and needed to compile against a specific platform (x86 instead of the default Any CPU). The reason was that my project referenced a DLL written in managed C++, compiled for x86! Mixing x64 and x86 is never a good idea, and since C# projects by default compile against Any CPU, they will run in 64-bit mode on a 64-bit computer. When the application tries to load the 32-bit DLL, it crashes...
Now, in VS C# 2008 Express, you cannot specify any target platform other than Any CPU. I guess the thinking is "if you need support for specific platforms, you should pay for a commercial version".
I was reading through MSDN, and found this note:
Note /platform is not available in the development environment in Visual C# Express.
"Damn", I thought, "this is the end. But what does development environment mean?" I had to try it out manually from the command prompt, so I compiled the project in VS and copied the command line from the output window, which looked something like:
C:\WINDOWS\Microsoft.NET\Framework\v3.5\Csc.exe /noconfig /nowarn:1701,1702 ...

I put the line in a batch file, added the /platform:x86 flag, compiled it and... it worked! The project got compiled with x86 as the target platform.
As it turns out, the VS C# 2008 Express compiler does support the /platform option, and probably others as well. Only the IDE lacks support for the option, which is kind of a hacky way of disabling a feature, IMO.
Now, I don't want to build the project using a batch file every time; I would like to build it inside the VS IDE so that others can easily build it too. As the VS project files are nothing but XML, you can easily open them up in any editor you like to see their contents. I did this to find out if there was some way to add a platform flag for the compiler, but unfortunately I couldn't find anything interesting (of course, that would be too easy!). I continued searching the web, and found this strange property:
CSharpProjectConfigurationProperties3.PlatformTarget Property

This member provides internal-only access to C# project configuration properties.
...
External components can access these properties through the Properties collection for the appropriate Visual Studio automation object.

"Well", I said to myself, "the Properties collection probably refers to the XML properties found in the project file", so once again I opened up the project file and browsed to two property groups:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|x86' ">

and

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|x86' ">

In these groups, I added the PlatformTarget property:

<PlatformTarget>x86</PlatformTarget>

I saved the file, reloaded it in VS, recompiled the project... and voilà, it worked! All you need to do is find the magic property to add to the project file! :)
This proves that C# Express contains more under the hood than what's visible to the eye. With a simple editor, and some knowledge on how to search MSDN, you can get a lot out of this product!
Friday, May 23, 2008
Lazy FizzBuzz in Haskell
Christian has been writing too many FizzBuzzes [1,2,3] lately, without me responding! We also got a comment on the first fizzbuzz post from sganslandt. I've actually had the code for this post for two weeks now, but haven't had the time to finish it.

The post is based on lazy evaluation, and the neat thing is that this is the default behavior in Haskell, in contrast to most other languages. Among other things, lazy evaluation allows you to reason in another way when constructing your software.

Having lazy evaluation in a language makes it "kind of important" to have control over side effects, so lazy evaluation and pure functions live in symbiosis in Haskell.
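Haskell gives you this laziness for free; as a rough analogy in a more mainstream language, a Python generator also produces its (potentially infinite) elements only on demand. A sketch for illustration, not part of the post's Haskell code:

```python
from itertools import count, islice

def fizzes():
    # An "infinite list": yields "" or "Fizz" for 1, 2, 3, ... on demand.
    for i in count(1):
        yield "Fizz" if i % 3 == 0 else ""

# Nothing is computed until we actually pull elements out:
first_ten = list(islice(fizzes(), 10))
print(first_ten)
```

The difference is that in Python you must opt in to laziness with generators, whereas in Haskell every list works this way by default.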
Let's start out with a regular divides..

> divides m n = m `mod` n == 0

We then construct a function that takes a divisor, a string and a number, and returns a string. Note that the number is the last argument.

> genWord :: Int -> String -> Int -> String
> genWord x str i
>   | i `divides` x = str
>   | otherwise = ""

Now the fun begins! We can generate infinite lists with "fizzes" and "buzzes", just by mapping genWord over an infinite list.

> fizzes = map (genWord 3 "Fizz") [1..]
> buzzes = map (genWord 5 "Buzz") [1..]
> nums = [1..]

So, for example, the first ten elements of fizzes look like this:

["","","Fizz","","","Fizz","","","Fizz",""]

Now, having three infinite lists, we need to combine them into one infinite list. Is this hard to digest? I guess it's not the "normal routine" one follows when constructing imperative programs. So, how could this in any way be useful outside the esoteric world of academia? Let's talk a bit more about that after all the code, focusing on one thing at a time. Here's combine:
> combine :: String -> String -> Int -> String
> combine str1 str2 n
>   | null ret = show n
>   | otherwise = ret
>   where ret = str1 ++ str2

In other words, we concatenate the "fizz" string with the "buzz" string, and if the concatenated string is empty, we return the number instead (as a string). This makes the "n % 15" case unnecessary, which is nice. The idea behind creating combine is that if we know how to combine one element, we are much closer to a solution to our problem. We do restrict ourselves when defining the type of combine, though: the only requirement for n is that it is showable, but the example is just much clearer without parametrized types..
A problem left to solve is that of having "a bunch of lists" and returning one list. The function zip does exactly that for two lists, and zip3 is a function that takes three lists and returns a list of triples (source here).

From ghc's Prelude documentation:

zip3 :: [a] -> [b] -> [c] -> [(a, b, c)]
zip3 takes three lists and returns a list of triples, analogous to zip.

Now, we could rewrite combine to take a triple instead of three arguments. Currying is perhaps not a well-known concept, but essentially it is another way of applying arguments to a function:

f(x,y) // Normal style (un-curried)
f x y  -- Curried
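To make the idea concrete in a more mainstream notation, here is currying sketched in Python (illustrative only; Haskell functions are curried by default, without any of this ceremony):

```python
def curry2(f):
    # Turn a two-argument function into a chain of one-argument functions.
    return lambda x: lambda y: f(x, y)

def add(x, y):
    return x + y

curried_add = curry2(add)
print(curried_add(1)(2))  # applying one argument at a time
```

Partially applying just the first argument, as in curried_add(1), gives you a new function, which is exactly what map (genWord 3 "Fizz") does above.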
So, we define a function, uncurry3, which takes a function (of three arguments) as its first argument and returns a function that takes one argument, a triple. A funny thing is that the type declaration is longer than the implementation:

> uncurry3 :: (a -> b -> c -> d) -> ((a, b, c) -> d)
> uncurry3 f (a,b,c) = f a b c

Here's the "grande finale"! We apply fizzes, buzzes and nums to zip3, giving an infinite list of triples. We then map the uncurried function (of the "third order") over that infinite list, giving an infinite list of fizzbuzzes with numbers in between.

> resultInf :: [String]
> resultInf = map (uncurry3 combine) (zip3 fizzes buzzes nums)

Finally, we take the first 100 elements of the infinite list, and print them to stdout.

> result100 :: [String]
> result100 = take 100 resultInf

> printResult :: IO ()
> printResult = mapM_ putStrLn result100
The complete program is 15 lines of code, excluding type declarations, and I'd argue that the code is easy to read and understand. So, let's talk about why we would want to create an esoteric and academic example like this, but first a quick recap..
- divides is the mathematical operator '|'
- genWord is a rule for generating words
- fizzes, buzzes and nums generate sets of values
- combine is a rule for combining three elements into one
- uncurry3 converts a curried function to a function of triples
- result100 and printResult deal with presenting the result
Do you see the trace of math here? We treat the lists as sets and define some transformations, which makes it easier to reason about the problem, in the spirit of Pólya. We decompose the problem into smaller sub-problems, which makes the program much easier to reason about. Separation of Concerns (SoC) is an OO principle that advocates exactly this.
When I think about it, OO has a lot of nice principles which improve the quality of software, but I argue that OO is extremely hard to get right! Principles such as LawOfDemeter, SoC, MinimalInterface and the Liskov substitution principle are just a few that (often) make for good OO software, but they are hard to get right all at once. Further, I'm not convinced that these principles are discussed among developers in their daily work, unless the developers have a "pattern background" or are otherwise interested in OO design. Time pressure is probably the killer here. What is discussed at your (developer) meetings at work?
Finally, if you haven't seen it, watch Brian Beckman's "Don't fear the monads" on Channel9.
Note: as before, this post is written in Literate Haskell, which means that you can save the entire content in this post, paste it into a file and load that file with ghci. Try it!
Update: You might want to read Don Stewart's post (called "Haskell is a strict language") on the optimizations that's going on in ghc. Very nice!
Thursday, May 22, 2008
FizzBuzz with F#
Once again, I'm going to give a solution to the fizz buzz problem, but this time in a, for me, completely new language - F#!
I've been following the articles on Why I Love F# by Dustin Campell, but even though I was inspired by the language, I never got to download F# and try it... until this very day! I installed it an hour ago, experimented some, and just have to share my experience. As a first mission I tried to write a solution for the fizz buzz problem, and (with some basic knowledge from haskell programming) it didn't take me too long to find a solution. Here it is:
let fizzBuzz (x:int) =
match x with
| x when x % 15 = 0 -> "FizzBuzz"
| x when x % 3 = 0 -> "Fizz"
| x when x % 5 = 0 -> "Buzz"
| x -> x.ToString()
let main = System.Console.WriteLine(String.concat "\n" (List.map fizzBuzz [1..100]))
More or less the same solution as the one given by bugrit in the last post. The fizzBuzz function takes an integer and returns the fizzbuzz string representation of that number, using pattern matching. Unfortunately, you have to specify the type of the argument, because F# can't infer it when you use x.ToString() (ToString() is a member function of object, so the receiver could be of any type).
The main function uses String.concat to concatenate the strings returned by fizzBuzz. In F#, all basic functions are put in libraries, which are prefixed with their names, such as String and List. They resemble namespaces in C# a lot, and by using the open keyword you can use all functions in a given library globally, without the prefix.
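For comparison, the same match expression can be sketched as plain conditionals in Python (illustrative only; note that the order matters, since the 15 case must come first, just like in the F# match):

```python
def fizz_buzz(x):
    if x % 15 == 0:
        return "FizzBuzz"
    if x % 3 == 0:
        return "Fizz"
    if x % 5 == 0:
        return "Buzz"
    return str(x)

print("\n".join(fizz_buzz(i) for i in range(1, 101)))
```

What the F# version buys you over this is that the match syntax makes the fall-through ordering explicit, and the compiler checks that the match is exhaustive.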
There's a lot more to find out about the language, for example how to integrate it with your current C# project. I'll blog about it if I have time to try it out!