Planet IronPython

April 25, 2013

Miguel de Icaza

Need for Exercises

For many years, I have learned various subjects (mostly programming related, like languages and frameworks) purely by reading a book, blog posts or tutorials on the subject, and maybe doing a few samples.

In recent years, I "learned" new programming languages by reading books on the subject. And I have noticed an interesting phenomenon: when given a choice between using these languages on a day-to-day basis or using another language I am already comfortable with, I go for the language I am comfortable with. This, despite my inner desire to use the hot new thing, or try out new ways of solving problems.

I believe the reason this is happening is that most of the texts I have read that introduce these languages are written by hackers and not by teachers.

What I mean by this is that these books are great at describing and exposing every feature of the language, and show you some clever examples, but none of them actually force you to write code in the language.

Compare this to Scheme and the book "Structure and Interpretation of Computer Programs". That book is designed with teaching in mind, so at the end of every section where a new concept has been introduced, the authors have a series of exercises specifically tailored to use the knowledge that you just gained and put it to use. Anyone who reads that book and does the exercises is guaranteed to come out a solid Scheme programmer, and will know more about computing than they would from reading any other book.

In contrast, the experience of reading a modern computing book from most of the high-tech publishers is very different. Most of the books being published do not have an educator reviewing the material; at best they have an editor who will fix your English, reorder some material and make sure the proper text is italicized and your samples are monospaced.

When you finish a chapter in a modern computing book, there are no exercises to try. Your choices are either to take a break by checking some blogs or to keep marching in a quest to collect more facts in the next chapter.

During this process, while you amass a bunch of information, at some neurological level, you have not really mastered the subject, nor gained the skills that you wanted. You have merely collected a bunch of trivia which most likely you will only put to use in an internet discussion forum.

What books involving an educator will do is include exercises that have been tailored to use the concepts that you just learned. When you come to this break, instead of drifting to the internet you can sit down and try to put your new knowledge to use.

Well-developed exercises are an application of the psychology of Flow: they match the exercise to the skills that you have developed and guide you along a path that keeps you in an emotional state that includes control, arousal and joy (flow).

Anecdote Time

Back in 1988, when I first got the first edition of "The C++ Programming Language", there were a couple of very simple exercises in the first chapter that took me a long time to get right, and they both proved very educational.

The first exercise was "Compile Hello World". You might think, that is an easy one, I am going to skip that. But I had decided that I was going to do each and every single one of the exercises in the book, no matter how simple. So if the exercise said "Build Hello World", I would build Hello World, even if I was already a seasoned assembly language programmer.

It turned out that getting "Hello World" to build and run was very educational. I was using the Zortech C++ compiler on DOS back then, and getting a build turned out to be almost impossible. I could not get the application to build; I got some obscure error and no way to fix it.

It took me days to figure out that I had the Microsoft linker in my path before the Zortech Linker, which caused the build to fail with the obscure error. An important lesson right there.

On Error Messages

The second exercise that I struggled with was a simple class. The simple class was missing a semicolon at the end. But unlike modern compilers, the error message from the Zortech C++ compiler at the time was less than useful. It took a long time to spot the missing semicolon, because I was not paying close enough attention.

Doing these exercises trains your mind to recognize that "useless error message gobble gobble" actually means "you are missing a semicolon at the end of your class".

More recently, I learned the same hard way that the F# error message "The value or constructor 'foo' is not defined" really means "You forgot to use 'rec' in your let", as in:

let foo x =
   if x = 1 then
     1
   else
     foo (x-1)   // without 'rec' on the 'let', 'foo' is not in scope in its own body

That is a subject for another post, but the F# error message should tell me what I did wrong at a language level, as opposed to explaining to me why the compiler is unable to figure things out in its internal processing of the matter.

Plea to book authors

Nowadays we are cranking out books left and right to explain new technologies, but rarely do these books get input from teachers and professional pedagogues. So we end up accumulating a lot of information, we sound lucid at cocktail parties and might even engage in a pointless engineering debate over features we barely master. But we have not learned.

Coming up with the ideas to try out what you have just learned is difficult. As you think of things that you could do, you quickly find that you are missing knowledge (discussed in further chapters) or your ideas are not that interesting. In my case, my mind drifts into solving other problems, and I go back to what I know best.

Please, build exercises into your books. Work with teachers to find the exercises that match the material just exposed and help us get in the zone of Flow.

by Miguel de Icaza (miguel@gnome.org) at April 25, 2013 09:21 PM

April 13, 2013

Miguel de Icaza

Introducing MigCoin

Non-government controlled currency systems are now in vogue: currencies that are not controlled by some government that might devalue your preciously earned pesos in the blink of an eye.

BitCoin is powered by powerful cryptography and math to ensure a truly digital currency. But it poses significant downsides, for one, governments can track your every move, and every transaction is stored on each bitcoin, making it difficult to prevent a tax audit in the future by The Man.

Today, I am introducing an alternative currency system that both keeps the anonymity of your transactions, and is even more secure than the crypto mumbo jumbo of bitcoins.

Today, I am introducing the MigCoin.

Like bitcoins, various MigCoins will be minted over time, to cope with the creation of value in the world.

Like bitcoins, the supply of MigCoins will be limited and will eventually plateau. Like bitcoin, the MigCoin is immune to the will of some Big Government bureaucrat that wants to control the markets by printing or removing money from circulation. Just like this:

Projected number of Bitcoins and MigCoins over time.

Unlike bitcoins, I am standing by them and I am not hiding behind a false name.

Like BitCoins, MigCoins come with a powerful authentication system that can be used to verify their authenticity. Unlike BitCoins, they do not suffer from this attached "log" that Big Brother and the Tax Man can use to come knocking on your door one day.

How does this genius of a currency work? How can you guarantee that governments or rogue entities won't print their own MigCoins?

The answer is simple, my friends.

MigCoins are made of my DNA material.

Specifically, spit.

Every morning, when I wake up, for as long as I remain alive, I will spit on a glass. A machine will take the minimum amount of spit necessary to lay down on a microscope slide, and this is how MigCoins are minted.

Then, you guys send me checks, and I send you the microscope slides with my spit.

To accept MigCoins payments all you have to do is carry a DNA sequencer with you, put the microscope slide on it, press a button, and BAM! 10 minutes later you have your currency validated.

To help accelerate the adoption of MigCoins, I will be offering bundles of MigCoins with the Illumina MiSeq Personal DNA sequencer.

Some might argue that the machine alone is 125,000 dollars and validating one MigCoin is going to set me back 750 dollars.

Three words my friends: Economy of Scale.

We are going to need a few of you to put in some extra pesos early on to get the prices of the DNA machines down.

Early Adopters of MigCoins

I will partner with visionaries like these to get the first few thousand sequencers built and start to get the prices down. Then we will hire that ex-Apple guy that was CEO of JC Penney to get his know-how on getting the prices of these puppies down.

Like Bitcoin, I expect to see a lot of nay-sayers and haters. People that will point out flaws on this system. But you know what?

The pace of innovation cannot be held back by old-school economists that "don't get it" and pundits on CNN trying to make a quick buck. Haters are going to hate. 'nuff said.

Next week, I will be launching MigXchange, a place where you can trade your hard BitCoins for slabs of spit.

Join the revolution! Get your spit on!

by Miguel de Icaza (miguel@gnome.org) at April 13, 2013 08:29 AM

March 29, 2013

Miguel de Icaza

Exclusive! What we know about the Facebook Phone

We obtained some confidential information about the upcoming Facebook Phone. Here is what we know about it so far:

by Miguel de Icaza (miguel@gnome.org) at March 29, 2013 09:54 PM

March 06, 2013

Miguel de Icaza

How I ended up with Mac

While reading Dave Winer's Why Windows Lost to Mac post, I noticed many parallels with my own experience with Linux and the Mac. I will borrow the timeline from Dave's post.

I invested years of my life on the Linux desktop, first as a personal passion (Gnome) and then while working for two Linux companies (my own, Ximian, and then Novell). During this period, I believed strongly in dogfooding our own products. I believed that both my team and I had to use the software we wrote and catch bugs and errors before it reached our users. We were pretty strict about it: both from an ideological point of view, back in the days of all-software-will-be-free, and then practically, during my tamer business days. I routinely chastised fellow team members that had opted for the easy path and avoided our Linux products.

While I had Macs at Novell (to support Mono on MacOS), it would take a couple of years before I used a Mac regularly. On a vacation to Brazil around 2008 or so, I decided to take only the Mac for the trip and learn to live with the OS as a user, not just as a developer.

Computing-wise, that three-week vacation turned out to be very relaxing. The machine would suspend and resume without problems, WiFi just worked, audio did not stop working, and I spent three weeks without having to recompile the kernel to adjust this or that, fight the video drivers, or deal with the bizarre and random speed degradation that my ThinkPad suffered.

While I missed the comprehensive Linux toolchain and userland, I did not miss having to chase the proper package for my current version of Linux, or beg someone to package something. Binaries just worked.

From this point on, using the Mac was a part-time gig for me. During the Novell layoffs, I returned my laptop to Novell and I was left with only one Linux desktop computer at home. I purchased a Mac laptop and while I fully intended to keep using Linux, the dogfooding driver was no longer there.

Dave Winer writes, regarding Windows:

Back to 2005, the first thing I noticed about the white Mac laptop, that aside from being a really nice computer, there was no malware. In 2005, Windows was a horror. Once a virus got on your machine, that was pretty much it. And Microsoft wasn't doing much to stop the infestation. For a long time they didn't even see it as their problem. In retrospect, it was the computer equivalent of Three Mile Island or Chernobyl.

To me, the fragmentation of Linux as a platform, the multiple incompatible distros, and the incompatibilities across versions of the same distro were my Three Mile Island/Chernobyl.

Without noticing, I stopped turning on the screen for my Linux machine during 2012. By the time I moved to a new apartment in October of 2012, I did not even bother plugging the machine back in, and to this date, I have yet to turn it on.

Even during all of my dogfooding and Linux advocacy days, whenever I had to recommend a computer to a single new user, I recommended a Mac. And whenever I gave away computer gifts to friends and family, it was always a Mac. Linux just never managed to cross the desktop chasm.

by Miguel de Icaza (miguel@gnome.org) at March 06, 2013 12:49 AM

February 23, 2013

Miguel de Icaza

The Making of Xamarin Studio

We spent a year designing the new UI and features of Xamarin Studio (previously known as MonoDevelop).

I shared some stories of the process on the Xamarin blog.

After our launch, we open sourced all of the work that we did, as well as our new Gtk+ engine for OSX. Lanedo helped us tremendously in making Gtk+ 2.x both solid and amazing on OSX (down to the new Lion scrollbars!). All of their work has either been upstreamed to Gtk+ or is in the process of being upstreamed.

by Miguel de Icaza (miguel@gnome.org) at February 23, 2013 12:42 AM

November 08, 2012

Miguel de Icaza

"Reality Distortion Field"

"Reality Distortion Field" is a modern day cop out. A tool used by men that lack the intellectual curiosity to explain the world, and can deploy at will to explain excitement or success in the market place. Invoking this magical super power saves the writer from doing actual work and research. It is a con perpetuated against the readers.

The expression originated as an observation made by those that worked with Steve to describe his convincing passion. It was an insider joke/expression which has now been hijacked by sloppy journalists whenever a subject is over their heads.

The official Steve Jobs biography left much to be desired. Here a journalist was given unprecedented access to Steve Jobs and could have gotten answers to the thousands of questions that we still have to this day. How did he approach problems? Did he have a method? How did he really work with his team? How did he turn his passion for design into products? How did he make strategic decisions about the future of Apple? How did the man balance engineering and marketing problems?

The biography has some interesting anecdotes, but fails to answer any of these questions. The biographer was not really interested in understanding or explaining Steve Jobs. He collected a bunch of anecdotes, strung them together in chronological order, had the text edited and cashed out.

Whenever the story gets close to an interesting historical event, or starts exploring a big unknown of Steve's work, we are condescendingly told that "Steve Activated the Reality Distortion Field".

Every. Single. Time.

Not once did the biographer try to uncover what made people listen to Steve. Not once did he try to understand the world in which Steve operated. The breakthroughs of his work are described with the same passion as a Reuters news feed: an enumeration of his achievements, with anecdotes to glue the thing together.

Consider the iPhone: I would have loved to know how the iPhone project was conceived. What internal process took place that allowed Apple to gain the confidence to become a phone manufacturer? There is a fascinating story of the people that made this happen, with millions of details of how this project was evaluated and what the vision for the project was, down to every small detail that Steve cared about.

Instead of learning about the amazing hardware and software engineering challenges that Steve faced, we are told over and over that all Steve had to do was activate his special super power.

The biography, in short, is a huge missed opportunity. Unprecedented access to a man that reshaped entire industries, and all we got was some gossip.

The "Reality Distortion Field" is not really a Steve Jobs super-power, it is a special super power that the technical press uses every time they are too lazy to do research.

Why do expensive and slow user surveys, or purchase costly research from analysts to explain why some product is doing well or why people are buying it, when you can just slap a "they activated the Reality Distortion Field and sales went through the roof" statement in your article?

As of today, a Google News search for "Reality Distortion Field Apple" reports 532 results for the last month.

Perhaps this is just how the tech press must operate nowadays. There is just no time to do research as new products are being unveiled around the clock, and you need to deliver opinions and analysis on a daily basis.

But as readers, we deserve better. We should reject these explanations for what they are: a cheap grifter trick.

by Miguel de Icaza (miguel@gnome.org) at November 08, 2012 01:57 AM

October 22, 2012

Miguel de Icaza

Mono 3.0 is out

After a year and a half, we have finally released Mono 3.0.

As I discussed last year, we will be moving to a more nimble release process with Mono 3.0. We are trying to reduce our inventory of pending work and get new features to everyone faster. This means that our "master" branch will remain stable from now on, and that large projects will instead be developed in branches that are regularly landed into our master branch.

What is new

Check our release notes for the full details of this release. But here are some tasty bits:

  • C# Async compiler
  • Unified C# compiler for all profiles
  • 4.5 Async API Profile
  • Integrated Microsoft's newly open sourced stacks:
    • ASP.NET MVC 4
    • ASP.NET WebPages
    • Entity Framework
    • Razor
    • System.Json (replaces our own)
  • New High performance Garbage Collector (SGen - with many performance and scalability improvements)
  • Metric ton of runtime and class library improvements.

Also, expect F# 3.0 to be bundled in our OSX distribution.

by Miguel de Icaza (miguel@gnome.org) at October 22, 2012 08:15 PM

Miguel de Icaza

The Sophisticated Procrastinator - Volume 1

Let me share with you some links that I found interesting in the past few weeks. These should keep the most diligent person busy for a few hours.

Software Reads

Talbot Crowell's Introduction to F# 3.0 slides from Boston CodeCamp.

Bertrand Meyer (The creator of Eiffel, father of good taste in engineering practices) writes Fundamental Duality of Software Engineering: on the specifications and tests. This is one of those essays where every idea is beautifully presented. A must read.

Good article on weakly ordered CPUs.

MonkeySpace slide deck on MonoGame.

David Siegel shares a cool C# trick, switch expressions.

Oak: Frictionless development for ASP.NET MVC.

Simon Peyton Jones on video talks about Haskell, past, present and future. A very tasty introductory talk to the language. David Siegel says about this:

Simon Peyton-Jones is the most eloquent speaker on programming languages. Brilliant, funny, humble, adorable.

Rob Pike's talk on Concurrency is not Parallelism. Rob is one of the crispest minds in software development; anything he writes, you must read, and everything he says, you must listen to.

Answering the question of what is the fastest way to access properties dynamically: DynamicMethod, LINQ expressions, or MethodInfo. Discussion with Eric Maupin.

OpenGL ES Quick Reference Card, plus a good companion: Apple's Programming Guide.

Interesting Software

SparkleShare, the open source file syncing service running on top of Git, released their feature-complete product. They are preparing for their 1.0 release. SparkleShare runs on Linux, Mac and Windows. Check out their Release Notes.

Experts warn that Canonical might distribute a patched version that modifies your documents and spreadsheets to include ads and Amazon referral links.

Pheed, a Twitter competitor with a twist.

Better debugging tools for Google Native Client.

Touch Draw comes to MacOS, a great vector drawing application for OSX. A good companion to Pixelmator and great for maintaining iOS artwork. It has great support for structured graphics and for importing/exporting Visio files.

MonoGame 3D on the Raspberry Pi video.

Fruit Rocks, a fun little game for iOS.

@Redth, the one-man factory of cool hacks, has released:

  • PassKitSharp, a library to generate, maintain, process Apple's Passbook files written in C#
  • Zxing.Mobile, an open source barcode library built on top of ZXing (Zebra Crossing) that runs on iOS and Android.
  • PushSharp, A server-side library for sending Push Notifications to iOS (iPhone/iPad APNS), Android (C2DM and GCM - Google Cloud Message), Windows Phone, Windows 8, and Blackberry devices.

Coding on Passbook: Lessons Learned.

Building a Better World

Phil Haack blogs about MonkeySpace

Patrick McKenzie writes Designing First Run Experiences to Delight Users.

Kicking the Twitter habit.

Twitter Q&A with TJ Fixman, writer for Insomniac Games.

Debunking the myths of budget deficits: Children and Grandchildren do not pay for budget deficits, they get interest on the bonds.

Cool Stuff

Live updates on HoneyPots set up by the HoneyNet Project.

The updated Programming F# 3.0, 2nd Edition, by Chris Smith, is out. This delightful book on F# has been updated to cover the new and amazing type providers in F#.

ServiceStack now has 113 contributors.

News

From Apple Insider: Google may settle mobile FRAND patent antitrust claim.

The Salt Lake City Tribune editorial board endorses Obama over Romney:

In considering which candidate to endorse, The Salt Lake Tribune editorial board had hoped that Romney would exhibit the same talents for organization, pragmatic problem solving and inspired leadership that he displayed here more than a decade ago. Instead, we have watched him morph into a friend of the far right, then tack toward the center with breathtaking aplomb. Through a pair of presidential debates, Romney’s domestic agenda remains bereft of detail and worthy of mistrust.

Therefore, our endorsement must go to the incumbent, a competent leader who, against tough odds, has guided the country through catastrophe and set a course that, while rocky, is pointing toward a brighter day. The president has earned a second term. Romney, in whatever guise, does not deserve a first.

From Blue States are from Scandinavia, Red States are from Guatemala, the author looks at the differences in policies in red vs blue states, and concludes:

Advocates for the red-state approach to government invoke lofty principles: By resisting federal programs and defying federal laws, they say, they are standing up for liberty. These were the same arguments that the original red-staters made in the 1800s, before the Civil War, and in the 1900s, before the Civil Rights movement. Now, as then, the liberty the red states seek is the liberty to let a whole class of citizens suffer. That’s not something the rest of us should tolerate. This country has room for different approaches to policy. It doesn’t have room for different standards of human decency.

Esquire's take on the 2nd Presidential Debate.

Dave Winer wrote Readings from News Execs:

There was an interesting juxtaposition. Rupert Murdoch giving a mercifully short speech saying the biggest mistake someone in the news business could make is thinking the reader is stupid. He could easily have been introducing the next speaker, Bill Keller of the NY Times, who clearly thinks almost everyone who doesn't work at the NY Times is stupid.

What do you know, it turns out that Bill Moyers is not funded by the government, nor does he get tax money, as many Republicans would like people to believe. The correction is here.

Twitter Quotes

Joseph Hill

"Non-Alcoholic Sparkling Beverage" - Whole Foods' $7.99 name for "bottle of soda".

Jonathan Chambers

Problem with most religious people is that their faith tells them to play excellently in game of life, but they want to be the referees.

Hylke Bons on software engineering:

"on average, there's one bug for every 100 lines of code" this is why i put everything on one line

Waldo Jaquith:

If government doesn't create jobs, isn't Romney admitting that his campaign is pointless?

Alex Brown

OH "It is a very solid grey area" #sc34 #ooxml

Jo Shields

"I don't care how many thousand words your blog post is, the words 'SYMBIAN WAS WINNING' mean you're too high on meth to listen too.

Jeremy Scahill, on warmonger Max Boot, asks the questions:

Do they make a Kevlar pencil protector? Asking for a think tanker.

Max Boot earned a Purple Heart (shaped ink stain on his shirt) during the Weekly Standard War in 1994.

Tim Bray

"W3C teams with Apple, Google, Mozilla on WebPlatform"... or we could all just sponsor a tag on StackOverflow.

David Siegel

Most programmers who claim that types "get in the way" had a sucky experience with Java 12 years ago, tried Python, then threw the baby out.

Outrage Dept

How Hollywood Studios employ creative accounting to avoid sharing the profits with the participants. If you were looking for ways to scam your employees and partners, look no further.

Starvation in Gaza: State forced to release 'red lines' document for food consumption.

Dirty tricks and disturbing trends: Billionaires warn employees that if Obama is reelected, they will be facing layoffs.

Israeli Children Deported to South Sudan Succumb to Malaria:

Here we are today, three months later, and within the last month alone, these two parents lost two children, and the two remaining ones are sick as well. Sunday is already in hospital with malaria, in serious condition, and Mahm is sick at home. “I’ve only two children left,” Michael told me today over the phone. The family doesn’t have money to properly treat their remaining children. The hospitals are at full capacity and more people leave them in shrouds than on their own two feet. I ask you, beg of you to help me scream the story of these children and their fate, dictated by the heartless, immoral Israeli government.

When Suicide is Cheaper: the horrifying tales of Americans that cannot afford health care.

Paul Ryan is not that different from Todd Akin when it comes to women's rights.

Interesting Discussions, Opinions and Articles

A Windows 8 critique: someone is not very happy with it.

On "meritocracy": what is wrong with it.

Fascinating read on the fast moving space of companies: Intimate Portrait of Innovation, Risk and Failure Through Hipstamatic's Lens.

Kathy Sierra discusses sexism in the tech world. How she changed her mind about it, and the story that prevented her from seeing it.

A response to @antirez's sexist piece.

Chrystia Freeland's The Self-Destruction of the 1 Percent is a great article, which touches on the points of her book Plutocrats.

Sony's Steep Learning Process, a look at the changing game with a focus on Sony's challenges.

Entertainment

One Minute Animated Primers on Major Theories on Religion.

Cat Fact and Gif provides cat facts, each with a gif that goes with it. The ultimate resource of cat facts and gifs.

by Miguel de Icaza (miguel@gnome.org) at October 22, 2012 06:03 AM

October 20, 2012

Miguel de Icaza

Drowning Good Ideas with Bloat. The tale of pkg.m4.

The gnome-config script was a precursor to pkg-config. They are tools that you can run to extract information about the flags needed to compile some code, link some code, or check for a version. gnome-config itself was a pluggable version of Tcl's tclConfig.sh script.

The idea is simple: pkg-config is a tiny little tool that uses a system database of packages to provide version checking and build information to developers. Said database is merely a well-known directory in the system containing files with the extension ".pc", one per package.
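
For illustration, here is roughly what one of these ".pc" files might look like. This is a hypothetical, minimal mono.pc; the paths, version and flags are made up, and the real file shipped by your distribution will differ:

prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: Mono
Description: Mono Runtime
Version: 2.10.9
Libs: -L${libdir} -lmono-2.0
Cflags: -I${includedir}/mono-2.0

pkg-config answers queries such as --cflags, --libs and --atleast-version by reading these fields.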

These tools are designed to be used in shell scripts to probe whether a particular software package has been installed. For example, the following shell script probes whether Mono is installed in the system:

# shell script

if pkg-config --exists mono; then
    echo "Found Mono!"
else
    echo "You can download mono from www.go.mono.com"
fi

It can also be used in simple makefiles to avoid hardcoding the paths to your software. The following makefile shows how:

CFLAGS = `pkg-config --cflags mono`
LIBS   = `pkg-config --libs mono`

myhost: myhost.c

And if you are using Automake and Autoconf to probe for the existence of a module with a specific version and extract the flags needed to build a module, you would do it like this:

AC_SUBST(MONO_FLAGS)
AC_SUBST(MONO_LIBS)
if pkg-config --atleast-version=2.10 mono; then
    MONO_FLAGS=`pkg-config --cflags mono`
    MONO_LIBS=`pkg-config --libs mono`
else
   AC_MSG_ERROR("You need at least Mono 2.10")
fi

There are two main use cases for pkg-config.

Probing: You use the tool to probe for some condition about a package and take an action based on it. For this, you use the pkg-config exit code in your scripts to determine whether the condition was met. This is what both the automake sample and the first script show.

Compile Information: You invoke the tool which outputs to standard output the results. To store the result or pass the values, you use the shell backtick (`). That is all there is to it (example: version=`pkg-config --version`).

The tool is so immensely simple that anyone can learn every command that matters in less than 5 minutes. The whole thing is beautiful because of its simplicity.

The Siege by the Forces of Bloat

Perhaps it was a cultural phenomenon, perhaps it was someone that had nothing better to do, perhaps it was someone that was just trying to be thorough, but somebody introduced one of the most poisonous memes into the pool of ideas around pkg-config.

Whoever did this thought that the "if" statement in shell was a complex command to master, or that someone might not be able to find the backtick on their keyboards.

And they hit us, and they hit us hard.

They introduced pkg.m4, a macro intended to be used with autoconf, that would allow you to replace the handful of command line flags to pkg-config with one of their macros (PKG_CHECK_MODULES, PKG_CHECK_EXISTS). To do this, they wrote a 200 line script, which replaces one line of shell code with almost a hundred. Here is a handy comparison of what these offer:

# Shell style
AC_SUBST(MONO_LIBS)
AC_SUBST(MONO_CFLAGS)
if pkg-config --atleast-version=2.10 mono; then
   MONO_CFLAGS=`pkg-config --cflags mono`
   MONO_LIBS=`pkg-config --libs mono`
else
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
fi

#
# With the idiotic macros
#
PKG_CHECK_MODULES([MONO], [mono >= 2.10],[], [
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
])

#
# If you do not need split flags, shell becomes shorter
#
if pkg-config --atleast-version=2.10 mono; then
   CFLAGS="$CFLAGS `pkg-config --cflags mono`"
   LIBS="$LIBS `pkg-config --libs mono`"
else
   AC_MSG_ERROR(Get your mono from www.go-mono.com)
fi

The above shows the full benefit of using a macro: MONO is a prefix that will have LIBS and CFLAGS extracted. So the shell script loses. The reality is that the macros only give you access to a subset of the functionality of pkg-config (no support for splitting -L and -l arguments, querying provider-specific variable names or performing macro expansion).

Most projects adopted the macros because they copy/pasted the recipe from somewhere else and thought this was the right way of doing things.

The hidden price is that saving those few lines of code actually inflicts a world of pain on your users. You will probably see this in your forums in the form of:

Subject: Compilation error

I am trying to build your software, but when I run autogen.sh, I get
the following error:

checking whether make sets $(MAKE)... yes
checking for pkg-config... /usr/bin/pkg-config
./configure: line 1758: syntax error near unexpected token `FOO,'
./configure: line 1758: `PKG_CHECK_MODULES(FOO, foo >= 2.9)'

And then you will engage in a discussion that, in the best case scenario, helps the user correctly configure his ACLOCAL_FLAGS or create his own "setup" script that will properly configure his system, and your new users will learn the difference between running a shell script and "sourcing" a shell script to properly set up their development systems.

In the worst case scenario, the discussion will devolve into how stupid your user is for not knowing how to use a computer and how he should be shot in the head and taken out to the desert for his corpse to be eaten by vultures; because, god dammit, they should have googled that on their own, and they should never have installed two separate automake installations in two prefixes in the first place without properly updating their ACLOCAL_FLAGS, or they should have figured out on their own that their paths were wrong. Seriously, what moron in this day and age is not familiar with the limitations of aclocal and the best practices for using system-wide m4 macros?

Hours are spent on these discussions every year. Potential contributors to your project are driven away, countless hours that could have gone into fixing bugs and producing code are wasted, and your users are frustrated. And you saved 4 lines of code.

The pkg.m4 is a poison that is holding us back.

We need to end this reign of terror.

Send pull requests to eliminate that turd, and ridicule anyone that suggests that there are good reasons to use it. In the war for good taste, it is ok to vilify and scourge anyone that defends pkg.m4.

by Miguel de Icaza (miguel@gnome.org) at October 20, 2012 08:08 PM

October 05, 2012

Miguel de Icaza

Why Mitt does not need an Economic Plan

Mitt Romney does not need to have an economic plan. He does not need to have a plan to cut the deficit or to cut services.

It is now well understood that to get the US out of the recession, the government has to inject money into the economy. To inject money into the economy, the US needs to borrow some money and spend it. The cost of borrowing is also at an all-time low, so the price to pay is very low.

Economists know this, and Republicans know this.

But the Republicans' top priority is to get Obama out of office at any cost. Even at the cost of prolonging the recession, damaging the US credit rating and keeping people unemployed.

The brilliance of the Republican strategy is that they have convinced the world that the real problem facing the US is the debt. Four years of non-stop propaganda in newspapers and on TV shows have turned everyone into a "fiscal conservative". The propaganda efforts have succeeded in convincing the world that US economic policy should be subject to the same laws as balancing a household budget (I won't link to this idiocy).

The campaign has been brilliant and has forced the Democrats to adopt policies of austerity, instead of policies of growth. Instead of spending left and right to create value, we are cutting. And yet, nobody has stopped the propaganda and pointed out that growth often comes after spending money. Startups start in the red and are funded for several years before they become profitable; companies go public and use the IPO to raise capital to grow, and for many years they lose money until their investments pay off and allow them to turn the tide.

So this mood has forced Obama to talk about cuts. He needs to be detailed about his cuts, he needs to be a fiscal conservative.

But Economists and Republicans know what the real fix is. They know they have to spend money.

If Romney is elected to office, he will do just that. He will borrow and spend money, because that is the only way of getting out of the recession. That is why his plan does not need to have any substance, and why he can ignore the calls for more details: he has no intention of following through on them.

Obama made a critical mistake in his presidency. He decided to compromise with Republicans, begging to be loved by them, and in the process betrayed his own base and played right into the Republicans' plans.

by Miguel de Icaza (miguel@gnome.org) at October 05, 2012 01:00 AM

October 03, 2012

Miguel de Icaza

Mono 2.11.4 is out

A couple of weeks ago we released Mono 2.11.4; I had not had time to blog about it.

Since our previous release a month before, we had some 240 commits, spread like this:

488 files changed, 28716 insertions(+), 6921 deletions(-)

Among the major updates in this release:

  • Integrated the Google Summer of Code code for Code Contracts.
  • Integrated the Google Summer of Code code for TPL's DataFlow.
  • Plenty of networking stack fixes and updates (HTTP stack, web services stack, WCF)
  • Improvements to the SGen GC.
  • TPL fixes for AOT systems like the iPhone.
  • Debugger now supports batched method invocations.

And of course, a metric ton of bug fixes all around.

Head over to Mono's Download Page to get the goods. We would love to hear about any bugs so that we can have a great stable release.

by Miguel de Icaza (miguel@gnome.org) at October 03, 2012 12:23 AM

October 02, 2012

Miguel de Icaza

TypeScript: First Impressions

Today Microsoft announced TypeScript, a typed superset of Javascript. This means that existing Javascript code can be gradually modified to add typing information to improve the development experience: both by providing better errors at compile time and by providing code-completion during development.

As a language fan, I like the effort, just like I pretty much like most new language efforts aimed at improving developer productivity: from C#, to Rust, to Go, to Dart and to CoffeeScript.

A video introduction from Anders was posted on Microsoft's web site.

The Pros

  • Superset of Javascript allows easy transition from Javascript to typed versions of the code.
  • Open source from the start, using the Apache License.
  • Strong types assist developers in catching errors before they deploy the code; this is a very welcome addition to the developer toolchest. Script#, Google GWT and C# on the web all try to solve the same problem in different ways.
  • Extensive type inference, so you get to keep a lot of the dynamism of Javascript, while benefiting from type checking.
  • Classes, interfaces, visibility are first class citizens. It formalizes them for those of us that like this model instead of the roll-your-own prototype system.
  • Nice syntactic sugar reduces boilerplate code to explicit constructs (class definitions for example).
  • TypeScript is distributed as a Node.JS package, and it can be trivially installed on Linux and MacOS.
  • The adoption can be done entirely server-side, or at compile time, and requires no changes to existing browsers or runtimes to run the resulting code.

Out of Scope

Type information is erased when the code is compiled, just like Java erases generic information when it compiles. This means that the underlying Javascript engine is unable to optimize the resulting code based on the strong type information.

Dart on the other hand is more ambitious, as it uses the type information to optimize the quality of the generated code. This means that a function that adds two numbers (function add (a,b) { return a+b;}) can generate native code to add two numbers; basically, it can generate the following C code:

double add (double a, double b)
{
    return a+b;
}

While weakly typed Javascript must generate something like:

JSObject add (JSObject a, JSObject b)
{
    if (type (a) == typeof (double) &&
	type (b) == typeof (double))
	return a.ToDouble () + b.ToDouble ();
    else
	JIT_Compile_add_with_new_types ();
}

The Bad

The majority of the Web is powered by Unix.

Developers use MacOS and Linux workstations to write the bulk of the code, and deploy to Linux servers.

But TypeScript only delivers half of the value in using a strongly typed language to Unix developers: strong typing. Intellisense, code completion and refactoring are tools that are only available to Visual Studio Professional users on Windows.

There is no Eclipse, MonoDevelop or Emacs support for any of the language features.

So Microsoft will need to convince Unix developers to use this language merely based on the benefits of strong typing, a much harder task than luring them with both language features and tooling.

There is some basic support for editing TypeScript from Emacs, which is useful to try the language, but without Intellisense, it is obnoxious to use.

by Miguel de Icaza (miguel@gnome.org) at October 02, 2012 01:35 AM

September 16, 2012

Jeff Hardy's Blog (NWSGI)

Changing .NET Assembly Platforms with Mono.Cecil

By default, .NET assemblies can only be loaded by the platform they are built for, or potentially anything later in the same stream (.NET 4 can load .NET 2 assemblies, Silverlight 5 can load Silverlight 4 assemblies, etc.). However, some of the stuff I’ve been working on for IronPython would be a lot easier if I could just build one assembly and use it anywhere. While it’s not possible with just one assembly, I can generate all of the other assemblies from the base one, with a few caveats.

The problem is caused by Reflection.Emit. IronPython uses RefEmit to generate .NET classes at runtime, and has the ability to store those on disk. However, RefEmit will only generate assemblies for the .NET runtime it is currently running under, which is usually .NET 4. Not all platforms support RefEmit, and re-running the compilation on every platform that needed it would be a pain anyway.

IKVM.Reflection offers a RefEmit-compatible API that can target any platform, but using it would require changing IronPython’s output code to use IKVM RefEmit instead of standard RefEmit when compiling, which is a fairly large change I didn’t want to make right now (maybe for 3.0).

Mono.Cecil is a library for manipulating .NET assemblies. It's often used to inject code into assemblies and for other mundane tasks. What I wanted to know was whether I could take a .NET 4 assembly generated by IronPython and produce a .NET 2 assembly that would run on IronPython for .NET 2. The answer turns out to be yes, but it's a bit of a pain.

Rewriting assemblies for fun and profit(?)

There are a few things in an assembly that may have to change to get it to work on a different runtime. The first is simple: change the target platform, which is part of the assembly metadata. The next is a bit trickier, but not too bad: change the versions of referenced assemblies to match the platform you are targeting. The third part requires some tedious cataloguing: find any types that are located in different assemblies and change them to point at the correct ones for that target platform. The final piece is the most difficult: potentially, rewrite the actual IL code so that it works on any platform.

The first part, changing the target runtime, is trivial:

ad.Modules[0].Runtime = TargetRuntime.Net_2_0

The second part is not much harder, but requires some extra setup: we need to know what versions to change the references to. This is potentially an impossible problem, because you don't always know what types might be present, and any references that aren't to framework assemblies could break. Right now, I'm just hardcoding everything, but it would be better to pull this from whatever version of mscorlib.dll is being targeted.

{
    "mscorlib": (2,0,0,0),
    "System": (2,0,0,0),
    "System.Core": (3,5,0,0),
}
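
As a rough illustration of how such a mapping could be applied, here is a hypothetical IronPython sketch using Mono.Cecil; the file names are made up, and ad plays the same role as in the earlier snippet. Treat this as a sketch of the idea rather than the post's actual code:

import clr
clr.AddReference("Mono.Cecil")
from Mono.Cecil import AssemblyDefinition
from System import Version

# Hypothetical version map, mirroring the dictionary above.
target_versions = {
    "mscorlib": Version(2, 0, 0, 0),
    "System": Version(2, 0, 0, 0),
    "System.Core": Version(3, 5, 0, 0),
}

ad = AssemblyDefinition.ReadAssembly("Generated.dll")    # made-up input name
for ref in ad.MainModule.AssemblyReferences:
    if ref.Name in target_versions:
        # Retarget the framework reference to the platform's version.
        ref.Version = target_versions[ref.Name]
ad.Write("Generated.v2.dll")                             # made-up output name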

The next part of the process is changing the assembly a type belongs in. It's not actually that hard to do in Mono.Cecil, it just takes a lot of upfront knowledge about how things have moved around. In the .NET 2 version of IronPython, the DLR is in Microsoft.Scripting.Core; in .NET 4, it's in System.Core. Successfully loading a generated assembly means changing the relevant types from System.Core to Microsoft.Scripting.Core. In some cases, the namespace has also changed; the expression trees are in System.Linq.Expressions in .NET 4 but in Microsoft.Scripting.Ast for .NET 2.

The key here is to use Module.GetTypeReferences() to get all of the types an assembly references, and then change the Scope property to point to the new assembly and the Namespace property to the new namespace.
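
Continuing the hypothetical sketch above, that retargeting step might look something like this; the Microsoft.Scripting.Core version number is a placeholder, not something taken from the post:

from Mono.Cecil import AssemblyNameReference
from System import Version

# Reference to the assembly that hosts the DLR types on .NET 2 (placeholder version).
scripting_core = AssemblyNameReference("Microsoft.Scripting.Core", Version(1, 0, 0, 0))
ad.MainModule.AssemblyReferences.Add(scripting_core)

for tref in ad.MainModule.GetTypeReferences():
    if tref.Namespace == "System.Linq.Expressions":
        tref.Scope = scripting_core                  # the type moved assemblies...
        tref.Namespace = "Microsoft.Scripting.Ast"   # ...and namespaces on .NET 2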

The final part (which is actually done first) is having to rewrite the IL code to replace any unsupported calls with ones that are supported. Thankfully, there is only one case of that so far: StrongBox<T>(), which exists in .NET 4 (and is used by LambdaExpression.CompileToMethod()) but does not exist in .NET 3.5. The parameterless constructor call gets replaced by passing null to the constructor that takes an initial value, which is all the parameterless one does. This is actually pretty straightforward:

strongbox_ctor_v2 = clr.GetClrType(StrongBox).MakeGenericType(Array[object]).GetConstructor((Array[object],))
strongbox_ctor_v4 = clr.GetClrType(StrongBox).MakeGenericType(Array[object]).GetConstructor(Type.EmptyTypes)

method = dcc.Methods[0]
il = method.Body.GetILProcessor()
instr = method.Body.Instructions[4] # This is specific to my use; YMMV
il.InsertBefore(instr, il.Create(OpCodes.Ldnull))
il.Replace(instr, 
    il.Create(OpCodes.Newobj, 
        method.Module.Import(strongbox_ctor_v2)))

There is one caveat here: because of how assembly references work, the IL rewriting should be done before the reference versions are changed, so that there are no stray references to 4.0 assemblies.

Next Steps

This is all just a proof of concept; there are a few more things to do to make it usable. For example, it needs to be able to look at a set of references and work out which types moved where based on that (this is really important for Windows 8, which moved everything, it seems). Still, the approach seems promising; hopefully there aren't any more landmines to deal with.

by jdhardy (noreply@blogger.com) at September 16, 2012 06:22 AM

September 07, 2012

Miguel de Icaza

Free Market Fantasies

This recording of a Q&A with Noam Chomsky in 1997 could be a Q&A session done last night about bailouts, corporate welfare, and the various distractions that they use to keep us in the dark, like caring about "fiscal responsibility".

Also on iTunes and Amazon.

by Miguel de Icaza (miguel@gnome.org) at September 07, 2012 08:20 PM

Miguel de Icaza

2012 Update: Running C# on the Browser

With our push to share the kernel of your software in reusable C# libraries and build a native experience per platform (iOS, Android, WP7 on phones and WPF/Windows, MonoMac/OSX, Gtk/Linux), one component is always missing: what about a web UI that also shares some of the code?

Until very recently the answer was far from optimal, and included things like: put the kernel on the server and use some .NET stack to ship the HTML to the client.

Today there are two solid choices to run your C# code on the browser and share code between the web and your native UIs.

JSIL

JSIL will translate the ECMA/.NET Intermediate Language into Javascript and will run your code in the browser. JSIL is pretty sophisticated and their approach at running IL code on the browser also includes a bridge that allows your .NET code to reference web page elements. This means that you can access the DOM directly from C#.

You can try their Try JSIL page to get a taste of what is possible.

Saltarelle Compiler

The Saltarelle Compiler takes a different approach. It is a C# 4.0 compiler that generates JavaScript instead of generating IL. It is interesting that this compiler is built on top of the new NRefactory which is in turn built on top of our C# Compiler as a Service.

It is a fresh, new compiler and, unlike JSIL, it is limited to compiling the C# language. Although it is missing some language features, it is actively being developed.

This compiler was inspired by Script# which is a C#-look-alike language that generated Javascript for consuming on the browser.

Native Client

I left NativeClient out, which is not fair, considering that both Bastion and Go Home Dinosaurs are powered by Mono running on Native Client.

The only downside with Native Client today is that it does not run on iOS or Android.

by Miguel de Icaza (miguel@gnome.org) at September 07, 2012 12:00 AM

August 29, 2012

Miguel de Icaza

What Killed the Linux Desktop

True story.

The hard disk that hosted my /home directory on my Linux machine failed so I had to replace it with a new one. Since this machine lives under my desk, I had to unplug all the cables, get it out, swap the hard drives and plug everything back again.

Pretty standard stuff. Plug AC, plug keyboard, plug mouse but when I got to the speakers cable, I just skipped it.

Why bother setting up the audio?

It will likely break again and will force me to go on a hunting expedition to find out more than I ever wanted to know about the new audio system and the drivers technology we are using.

A few days ago I spoke to Klint Finley from Wired, who wrote the article titled OSX Killed Linux. The original line of questioning was about my opinion on Gnome 3's Shell vs Ubuntu's Unity vs Xfce as competing shells.

Personally, I am quite happy with Gnome Shell. I think the team that put it together did a great job, and I love how it enabled the Gnome designers (who historically only design and barely hack) to actually extend the shell, tune the UI and prototype things without having to beg a hacker to implement things for them. It certainly could use some fixes and tuning, but I am sure they will address those eventually.

What went wrong with Linux on the Desktop

In my opinion, the problem with Linux on the Desktop is rooted in the developer culture that was created around it.

Linus, despite being a low-level kernel guy, set the tone for our community years ago when he dismissed binary compatibility for device drivers. The kernel people might have some valid reasons for it, and might have forced the industry to play by their rules, but the Desktop people did not have the power that the kernel people did. But we did keep the attitude.

The attitude of our community was one of engineering excellence: we do not want deprecated code in our source trees, we do not want to keep broken designs around, we want pure and beautiful designs and we want to eliminate all traces of bad or poorly implemented ideas from our source code trees.

And we did.

We deprecated APIs, because there was a better way. We removed functionality because "that approach is broken", for degrees of broken from "it is a security hole" all the way to "it does not conform to the new style we are using".

We replaced core subsystems in the operating system with poor transition paths. We introduced compatibility layers that were not really compatible, nor were they maintained. When faced with "this does not work", the community response was usually "you are doing it wrong".

As long as you had an operating system that was 100% free, and you could patch and upgrade every component of your operating system to keep up with the system updates, you were fine and it was merely an inconvenience that lasted a few months while the kinks were sorted out.

The second dimension to the problem is that no two Linux distributions agreed on which core components the system should use. Either they did not agree, the schedules of the transitions were out of sync, or there were competing implementations for the same functionality.

The efforts to standardize on a kernel and a set of core libraries were undermined by the Distro of the Day that held the position of power. If you are the top dog, you did not want to make any concessions that would help other distributions catch up with you. Being incompatible became a way of gaining market share. A strategy that continues to be employed by the 800 pound gorillas in the Linux world.

To sum up: (a) First dimension: things change too quickly, breaking both open source and proprietary software alike; (b) incompatibility across Linux distributions.

This killed the ecosystem for third party developers trying to target Linux on the desktop. You would try once, make your best effort to support the "top" distro, or if you were feeling generous "the top three" distros, only to find out that your software no longer worked six months later.

Supporting Linux on the desktop became a burden for independent developers.

But at this point, those of us in the Linux world still believed that we could build everything as open source software. The software industry as a whole had a few home runs, and we were convinced we could implement those ourselves: spreadsheets, word processors, design programs. And we did a fine job at that.

Linux pioneered solid package management and the most advanced software updating systems. We did a good job, considering our goals and our culture.

But we missed the big picture. We alienated every third party developer in the process. The ecosystem that has sprung to life with Apple's OSX AppStore is just impossible to achieve with Linux today.

The Rise of OSX

When OSX was launched it was by no means a very sophisticated Unix system. It had an old kernel, an old userland, poor compatibility with modern Unix, primitive development tools and a very pretty UI.

Over time Apple addressed the majority of the problems with its Unix stack: they improved compatibility, improved their kernel, more open source software started working and things worked out of the box.

The most pragmatic contributors to Linux and open source gradually changed their goals from "a world run by open source" to "the open web". Others found that messing around with their audio card every six months to play music and the hardships of watching video on Linux were not worth that much. People started moving to OSX.

Many hackers moved to OSX. It was a good looking Unix, with working audio, PDF viewers, working video drivers, codecs for watching movies and at the end of the day, a very pleasant system to use. Many exchanged absolute configurability of their system for a stable system.

As for myself, I had fallen in love with the iPhone, so using a Mac on a day-to-day basis was a must. Having been part of the Linux Desktop efforts, I felt a deep guilt for liking OSX and moving a lot of my work to it.

What we did wrong

Backwards compatibility, and compatibility across Linux distributions is not a sexy problem. It is not even remotely an interesting problem to solve. Nobody wants to do that work, everyone wants to innovate, and be responsible for the next big feature in Linux.

So Linux was left with idealists that wanted to design the best possible system without having to worry about boring details like support and backwards compatibility.

Meanwhile, on Windows 8 you can still run the 2001 version of Photoshop that shipped when XP was launched. And you can still run your old OSX apps on Mountain Lion.

Back in February I attended FOSDEM and two of my very dear friends were giggling out of excitement at their plans to roll out a new system that will force many apps to be modified to continue running. They have a beautiful vision to solve a problem that I never knew we had, and that no end user probably cares about, but every Linux desktop user will pay the price.

That day I stopped feeling guilty about my new found love for OSX.

Update September 2nd, 2012

Clearly there is some confusion over the title of this blog post, so I wanted to post a quick follow-up.

What I mean with the title is that Linux on the Desktop lost the race for a consumer operating system. It will continue to be a great engineering workstation (that is why I am replacing the hard disk in my system at home) and yes, I am aware that many of my friends use Linux on the desktop and love it.

But we lost the chance of becoming a mainstream consumer OS. What this means is that nobody is recommending that a non-technical person go get a computer with Linux on it for their desktop needs (unless they are doing so for ideological reasons).

We had our share of chances. The best one was when Vista bombed in the marketplace. But we had our own internal battles and struggles to deal with. Some of you have written your own takes on our struggles in that period.

Today, the various Linux desktops are the best they have ever been: Ubuntu and Unity, Fedora and GnomeShell, RHEL and Gnome 2, Debian and Xfce, plus the KDE distros. And yet, we still have four major desktop APIs, and about half a dozen popular and slightly incompatible versions of Linux on the desktop, each with its own curated OS subsystems, with different packaging systems, with different dependencies and slightly different versions of the core libraries. That works great for pure open source, but not so much for proprietary code.

Shipping and maintaining apps for these rapidly evolving platforms is a big challenge.

Linux succeeded in other areas: servers and mobile devices. But on the desktop, our major feature and differentiator is price, and it comes at the expense of a timid selection of native apps and frequent breakage. The Linux Hater blog parodied this in a series of posts called the Greatest Hates.

The only way to fix Linux is to take one distro and one set of components as a baseline, abandon everything else, and have everyone contribute to this single Linux. Whether this is Canonical's Ubuntu, Red Hat's Fedora, Debian's system or a new joint effort is something that intelligent people will disagree about until the end of days.

by Miguel de Icaza (miguel@gnome.org) at August 29, 2012 09:09 PM

August 14, 2012

Miguel de Icaza

Mono 2.11.3 is out

This is our fourth preview release of Mono 2.11. This version includes Microsoft's recently open sourced EntityFramework and has been updated to match the latest .NET 4.5 async support.

We are quite happy with over 349 commits spread like this:

 514 files changed, 15553 insertions(+), 3717 deletions(-)

Head over to Mono's Download Page to get the goods.

by Miguel de Icaza (miguel@gnome.org) at August 14, 2012 12:04 AM

August 11, 2012

Miguel de Icaza

Hiring: Documentation Writer and Sysadmin

We are growing our team at Xamarin, and we are looking to hire both a documentation writer and a system administrator.

For the documentation writer position, you should be both familiar with programming and API design and be able to type at least 70 wpm (you can check your own speed at play.typeracer.com). Ideally, you would be based in Boston, but we can make this work remotely.

For the sysadmin position, you would need to be familiar with Unix system administration. Linux, Solaris or MacOS would work and you should feel comfortable with automating tasks. Knowledge of Python, C#, Ruby is a plus. This position is for working in our office in Cambridge, MA.

If you are interested, email me at: miguel at xamarin.

by Miguel de Icaza (miguel@gnome.org) at August 11, 2012 09:17 PM

August 09, 2012

C. J. Adams-Collier

Linus on Instantiation and Armadaification

I feel a sense of pride when I think that I was involved in the development and maintenance of what was probably the first piece of software accepted into Debian which then had and still has direct upstream support from Microsoft. The world is a better place for having Microsoft in it. The first operating system I ever ran on an 8086-based CPU was MS-DOS 2.x. I remember how thrilled I was when we got to see how my friend’s 80286 system ran BBS software that would cause a modem to dial a local system and display the application as if it were running on a local machine. Totally sweet.

When we were living at 6162 NE Middle in the nine-eight 292, we got an 80386 which ran Doom. Yeah, the original one, not the fancy new one with the double barrel shotgun, but it would probably run that one, too. It was also totally sweet and all thanks to our armadillo friends down south and partially thanks to their publishers, Apogee. I suckered my brothers into giving me their allowance from Dad one time so that we could all go in on a Sound Blaster Pro 16 sound card for the family’s 386. I played a lot of Team Fortress and Q2CTF on that rig. I even attended the Quake 3 Arena launch party that happened at Zoid‘s place. I recall that he ported the original quake to Linux. I also recall there being naughty remarks included in the README.txt.

When my older brother, Aaron, turned 16, he was gifted a fancy car. When asked what type of car I would like when I turned 16, I said that I’d prefer a computer instead. So I got a high-end 80486 with math co-processor. It could compile the kernel in 15 minutes flat, with all the bits turned on in /usr/src/linux/.config. But this was later. I hadn’t even heard of linux when I got my system. I wanted to be entertained by the thing. I made sure to get a CD-Rom and a sound card. I got on the beta for Ultima Online and spent a summer as a virtual collier. Digging stuff out of mines north of Britannia and hauling them to town to make weapons and armor out of them. And then setting out in said armor only to be PK’d because I forgot healing potions and I was no good at fighting.

While I was in the middle of all this gaming, my friend Lucas told me that I should try out this lynx thing that they run at the University of Washington. He heard that it was reported to run doom faster than it ran on MS-DOS. It turns out that it did, but that it was not, in fact, called lynx. Or pine. The Doom engine ran so fast that the video couldn’t keep up. This was probably because they didn’t use double buffering for frame display, since they didn’t want to waste the time maintaining and switching context. I think I downloaded the boot/root 3.5″ disk pair and was able to get the system to a shell with an on-phone assist from the Rev. I then promptly got lost in bash and the virtual terminals (OMG! I GET SIX CONSOLES!?) and bought a book on the subject. It shipped with slackware. Which I ran. Until Debian came along. Lucas also recommended that I try out this IRC thing, so I did. And I’m still doing it on #linpeople just like I did back then.

I learned to write Pascal on dos. Then I learned c while they were trying to teach me c++. I learned emacs and vi when I was attending North Kitsap High School. I learned sed and only a little awk when I took Running Start classes in Lynnwood at Edmonds Community College and perl & x.509 while attending Olympic Community College and simultaneously jr-administering Sinclair Communications. I studied TCP/IP, UNP, APUE, C and algorithms & data structures while preparing for an interview with a company whose CEO claimed to have invented SCSI. I learned PGP and PHP while writing web-based adware for this company. I didn’t want to write ads and instead wanted to work in security, so took a job with Security Portal. While there, I wrote what one might call a blogging platform. It worked and made it possible for authors to write prose and poetry. Editors didn’t have to manage a database in order to review and publish the posts that were “ready.” Everyone but me was able to avoid html and cgi.

Then I sold pizza. Then I helped bring the bombay company onto the interwebs using the Amazon ECS (now AWS) platform. Then I helped support MaxDB. Then I helped develop and maintain the Amazon blogging platform. And then attempted to reduce the load on the Amazon pager system by doing and enforcing code reviews. It turns out that they prefer to run their support team at full bore and a load average of 16.

I am now, still, fully employed in an effort to make hard things possible. The hard thing we’re working on now is the implementation and ongoing operations of distributed x.500 infrastructure. This includes request handling, processing and delivery of response (à la HTTP, SMTP, IMAP, SIP, RTP, RTSP, OCSP) including authentication, authorization and auditing (AAA) of all transactions. It’s a hard thing to get right, but our product development team gets it right. Consistently and reliably. We make mistakes sometimes (sorry Bago), but we correct them and make the product better.

I’m the newest member of an R and d team (note: big R, little d) called NTR, which sits behind the firewall that is Product Development, out of production space. In a manner that reminds me of Debian Testing. We try new things. Our current project is to allow users to compare their current (cloud-based or iron-based) IT system with what their system would be like with a BIG-IP in front of it. I can probably come up with a demo if anyone’s interested in checking it out. I’ll go work on that now.

by C.J. Adams-Collier at August 09, 2012 06:46 PM

August 06, 2012

Mike Stall

Reflection vs. Metadata

Here are some old notes I had about Reflection vs. the raw IMetadata Import interfaces. They’re from a while ago (before CLR 4.0 was shipped!), but still relevant. Better to share late than never!

Quick reminder on the two APIs I’m comparing here:

  • Reflection is the managed API (System.Type and friends) for reading metadata. The CLR’s implementation is built on top of the CLR loader, and so it’s geared towards a “live” view of the metadata.  This is what everybody uses because it’s just so easy. The C# typeof() keyword gives you a System.Type and you’re already into the world of reflection.
  • IMetaDataImport is an unmanaged COM-classic API which is much lower level. ILDasm is built on IMetadataImport.

The executive summary is that the IMetadata APIs are a file-format decoder and return raw information. The Reflection APIs operate at a much higher abstraction level: they combine the metadata with other information in the PE file, fusion, and the CLR loader, and present a high-level type-system object model.

This difference means that while Reflection and Metadata are conceptually similar, there are things in Reflection that aren’t in the metadata and things in the metadata that aren’t exposed in reflection.

This is not an exhaustive list.

Differences because Reflection can access the loader

Reflection explicitly does eager assembly loading

The only input to IMetaDataImport is the actual bits in a file. It is a purely static API.

In contrast, reflection is a veneer over the CLR loader and thus can use the CLR Loader, fusion, assembly resolution, and policy from current CLR bindings as input.

In my opinion, that’s the most fundamental difference between them, and the root cause of many other differences.

This means that reflection can use assembly resolution, and that causes many differences with raw IMetadataImport:

  1. auto resolving a TypeRef and TypeSpecs to a TypeDef. You can't even retrieve the original typeRef/TypeSpec tokens from the reflection APIs (the metadata tokens it gives back are for the TypeDefs). Same for MethodDef vs. MemberRef.
  2. Type.GetInterfaces() - it returns interfaces from the base type.
  3. Type.GetGenericArguments() - as noted here already.
  4. random APIs like: Type.IsClass, IsEnum, IsValueType - these check the base class, and thus force resolution.
  5. determining if a MemberRef token is a constructor or a method because they have different derived classes (see below).
  6. representing illegal types ("Foo&&"). Reflection is built on the CLR Loader, which eagerly fails when loading an illegal type. (See below.)
  7. Assembly.GetType(string name) will automatically follow TypeForwarders, which will cause assembly loading.

The practical consequence is that a tool like ILDasm can use IMetaDataImport to inspect a single assembly (eg, Winforms.dll) without needing to do assembly resolution. Whereas a reflection-based tool would need to resolve the assembly references.
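
As a small sketch of that difference: asking a type for its interfaces walks the base-type chain, so reflection has to resolve the assemblies defining the base types and their interfaces, whereas a raw metadata reader would only hand back unresolved TypeRef tokens for them.

    class Derived : Exception { }   // the base type lives in another assembly (mscorlib)

    // Forces resolution of Exception's assembly in order to find the inherited interfaces.
    foreach (Type i in typeof(Derived).GetInterfaces())
    {
        Console.WriteLine(i.FullName);   // e.g. System.Runtime.Serialization.ISerializable
    }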

Different inputs

While Reflection is mostly pure and has significant overlap with the metadata, there is no barrier to prevent runtime input sources from leaking through the system and popping up in the API.  Reflection exposes things not in the PE-file, such as:

  1. additional interfaces injected onto arrays by the CLR loader  (see here).
  2. Type.get_Guid - if the guid is not represented in the metadata via a Guid attribute, reflection gets the guid via a private algorithm buried within the CLR. 

Generics + Type variables + GetGenericArguments()

In Reflection, calling GetGenericArguments() on an open generic type returns the System.Type objects for type-variables. Whereas in metadata, this would be illegal. You could at best get the generic argument definitions from the type definition.

In reflection, if you pass in Type variables to Type.MakeGenericType(), you can get back a type-def. Whereas in metadata, you'd still have a generic type. Consider the following snippet:

var t = typeof(Q2<>); // some generic type Q2<T>
var a1 = t.GetGenericArguments(); // {"T"}
var t2 = t.MakeGenericType(a1); 
Debug.Assert(t.Equals(t2)); 
Debug.Assert(t2.Equals(t)); 

In other words, metadata has 2 distinct concepts:

  1. The generic type arguments in the type definition (see IMDI2::EnumGenericParams)
  2. The generic type arguments from a type instantiation (as retrieved from a signature blob, see CorElementType.GenericInstantiation=0x15).

In reflection, these 2 concepts are unified together under the single Type.GetGenericArguments() API. Answering #1 requires type resolution whereas #2 can be done on a type-ref. This means that in reflection, you can't check for generic arguments without potentially doing resolution.

PE-files

Reflection exposes interesting data in the PE-file, regardless of whether it's stored in the metadata or in the rest of the PE-file.

Metadata is just the metadata blob within the PE file. It's somewhat arbitrary what's in metadata vs. not. Metadata can have RVAs to point to auxiliary information, but the metadata importer itself can't resolve those RVAs.

Interesting data in the PE but outside of the metadata:

  1. the entry point token is in CorHeaders outside of the metadata.
  2. Method bodies (including their exception information) are outside the metadata. 
  3. RVA-based fields (used for initializing constant arrays)
  4. embedded resources
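
A quick (non-exhaustive) sketch of reading some of these through reflection on the currently executing assembly:

    Assembly a = Assembly.GetExecutingAssembly();
    Console.WriteLine(a.EntryPoint);   // #1: the entry point, stored in the CorHeaders (null for a library)
    Console.WriteLine(a.EntryPoint.GetMethodBody().GetILAsByteArray().Length);  // #2: IL bytes of the method body
    foreach (string res in a.GetManifestResourceNames())   // #4: embedded resources
    {
        Console.WriteLine(res);
    }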

File management

It is possible to set a policy on the AppDomain that will require Assembly.Load to make a shadow copy of an assembly before it's loaded, and to open that copy instead of the assembly in its original location. This policy allows the user to specify where shadow copies should be created.

Failure points

IMetadataImport only depends on the bits in the file, so it has few failure points after opening. In contrast, Reflection has many dependencies, each of which can fail. Furthermore, there is no clear mapping between a reflection API and the services it depends on, so many reflection APIs can fail at seemingly random points.

Also, IMetadataImport allows representing invalid types, whereas Reflection will eagerly fail. For example, it is illegal to have a by-ref to a by-ref (eg, "Foo&&"). Such a type can still be encoded in the metadata file format via ilasm, and IMetaDataImport will load it and provide the signature bytes. However, Reflection will eagerly fail importing because the CLR Loader won't load the type.

Detecting failures requires eagerly resolving types, so there is a tension between making Reflection a deferred API vs. keeping the eager-failure semantics.

COM-Interop

Reflection represents types at runtime, like COM-interop objects, whereas metadata only provides a static view of the types.

Loader-added interfaces on arrays

In .Net 2.0, generic interfaces are added for arrays at runtime. So this expression "typeof(int[]).GetInterfaces()" returns a different result on .NET 2.0 vs. .NET 1.1; even if it's an identical binary. I mentioned this example in more detail here.
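
A tiny sketch of that:

    // On .NET 2.0 and later, this list includes the loader-injected generic interfaces
    // (IList<int>, ICollection<int>, IEnumerable<int>), none of which appear in the array type's metadata.
    foreach (Type i in typeof(int[]).GetInterfaces())
    {
        Console.WriteLine(i);
    }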

Differences in Object model

Reflection deliberately tries to be a higher-level, friendly managed API, and that leads to a bunch of differences.

MemberInfo.ReflectedType property

Reflection has a ReflectedType property which is set based on how an item is queried. So the same item, queried from different sources, will have a different ReflectedType property, and thus compare differently. This property is entirely a fabrication of the reflection object model and does not correspond to the PE file or metadata.
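
A small sketch of that: GetBaseException is declared on Exception and only inherited by ArgumentException, yet the two MethodInfos below carry different ReflectedTypes and so are treated as different members.

    MethodInfo viaBase    = typeof(Exception).GetMethod("GetBaseException");
    MethodInfo viaDerived = typeof(ArgumentException).GetMethod("GetBaseException");

    Console.WriteLine(viaBase.DeclaringType);      // System.Exception in both cases
    Console.WriteLine(viaDerived.ReflectedType);   // System.ArgumentException, reflecting how it was queried
    Console.WriteLine(viaBase.Equals(viaDerived)); // False, even though both refer to the same method definition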

Different object models for Type

The raw metadata format is very precise and represents types in a variety of distinct ways:

  1. TypeDef
  2. TypeRef
  3. TypeSpec, Signature blobs
  4. builtins ("I4")
  5. arrays,
  6. modifiers (pointer, byref)
  7. Type variables (!0, !!0)

These are all unique separate entities with distinct properties which in the metadata model, conceptually do not share a base class. In contrast, Reflection unifies these all into a single common System.Type object. So in reflection, you can't find out if your class’s basetype is a TypeDef or a TypeRef.

Pseudo-custom attributes

Reflection exposes certain random pieces of metadata as faked-up custom attributes instead of giving them a dedicated API the way IMetaDataImport does.

These "pseudo custom attributes" (PCAs) show up in the list of regular custom attributes with no special distinction. This means that requesting the custom attributes in reflection may return custom attributes not specified in the metadata. Since different CLR implementations add different PCAs, this list could change depending on the runtime you bind against.

Some examples are the Serialization and TypeForwardedTo attributes.

Custom Attributes

To get an attribute name in reflection, you must do CustomAttributeData.Constructor.DeclaringType.FullName.

This is a cumbersome route to get to the custom attribute name because it requires creating several intermediate objects (a ConstructorInfo and a Type), which may also require additional resolution. The raw IMetadataImport interfaces are much more streamlined.
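
For example, a quick sketch of that route via CustomAttributeData (the reflection-only view of attributes):

    foreach (CustomAttributeData cad in CustomAttributeData.GetCustomAttributes(typeof(string)))
    {
        // Constructor -> DeclaringType -> FullName: several intermediate objects just to get a name.
        Console.WriteLine(cad.Constructor.DeclaringType.FullName);
    }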

ConstructorInfo vs. MethodInfo

Metadata exposes both constructors and methods as tokens of the same type (mdMethodDef). Reflection exposes them as separate classes which both derive from MethodBase. This means that in order to create a reflection object over a MethodDef token, you must do some additional metadata resolution to determine whether to allocate a MethodInfo or a ConstructorInfo derived class. This has to be determined when you first allocate the reflection object and can't be deferred, so it forces eager resolution again.
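
To make that concrete, here's a small sketch: resolving an mdMethodDef token through reflection hands back a MethodBase, and reflection has already committed to the concrete derived class.

    // Take the token of a constructor and resolve it back through the module.
    int token = typeof(Exception).GetConstructor(Type.EmptyTypes).MetadataToken;
    MethodBase mb = typeof(Exception).Module.ResolveMethod(token);
    Console.WriteLine(mb is ConstructorInfo);   // True: the ConstructorInfo was allocated eagerly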

Type Equivalence and Assignability

Reflection exposes a specific policy for type equivalence, which it inherits from the CLR loader. Metadata just exposes the raw properties on a type and requires the caller to determine if types are equivalent.

For example, Reflection has the Type.IsAssignableFrom API, which may invoke the CLR Loader and Fusion, as well as CLR-host and version-specific Type Unification policies (such as no-PIA support), to determine if types are considered equal. The CLR does not fully specify the behavior of Type.IsAssignableFrom.

Case sensitivity matching

Metadata string APIs are case sensitive. Reflection string APIs often take an "ignoreCase" flag to facilitate usage with case-insensitive languages, like VB.
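
For example (a trivial sketch), Assembly.GetType has an ignoreCase overload with no counterpart in the raw metadata string lookups:

    // The third argument is ignoreCase.
    Type t = typeof(object).Assembly.GetType("system.string", false, true);
    Console.WriteLine(t == typeof(string));   // True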

by Mike Stall - MSFT at August 06, 2012 09:21 PM

August 04, 2012

Mike Stall

Converting between Azure Tables and CSV

I published a nuget package (CsvTools.Azure) to easily read/write CSVs to azure blobs and tables.  It builds on the CSV reader, also on Nuget (see CsvTools) and GitHub (https://github.com/MikeStall/DataTable ).

Azure Tables are very powerful, but can be tricky to use. I wanted something that:

  1. handled basic scenarios, such as uploading a CSV file to an Azure table with strongly typed columns, and downloading an Azure table as a CSV that I could then open in Excel. 
  2. Was easy to use and could accomplish most operations in a single line.
  3. Could still be type-safe.
  4. Had intelligent defaults. If you didn’t specify a partition key, it would infer one. If the defaults weren’t great, you could go back and improve them.

The CsvTools.Azure nuget package adds extension methods for DataTable, contained in the core CsvTools package.  These extension methods save a DataTable to an Azure blob or table, and can read a DataTable from an azure blob or table.

Examples with Azure Blobs

Writing to and from blobs is easy, since blobs resemble the file system. Here's an example that writes a data table and reads it back from blob storage.

        var dt = DataTable.New.Read(@"c:\temp\test.csv");

        // Write and Read from blobs
        dt.SaveToAzureBlob(Account(), "testcontainer", "test.csv");
        var dataFromBlob = DataTable.New.ReadAzureBlob(Account(), "testcontainer", "test.csv"); // read it back

These code snippets assume a sample CSV at c:\temp\test.csv:

name, species, score
Kermit, Frog , 10
Ms. Piggy, Pig , 50
Fozzy, Bear , 23

Examples with Azure Tables

The scenarios I find interesting with Csv and Azure Tables are:

  1. Ingress: Uploading a CSV as an azure table. I successfully uploaded a 3 million row CSV into Azure using this package. While CSVs don’t support indexing, once in Azure, you can use the standard table query operators (such as lookup by row and partition key)
  2. Egress: download an Azure table to a CSV. I find this can be useful for pulling down a local copy of things like logs that are stored in azure tables.

 

Azure Tables have some key differences from a CSV file:

  • Special columns and indexing: every row in an Azure Table has a Partition and Row key. These keys combine to form a unique index and have several other key properties documented on MSDN. A CSV has no unique keys for indexing and no mandated columns.
  • Schema: each row in an Azure Table can have its own schema. In a CSV, all rows have the same schema; a CSV is conceptually a 2d array of strings.
  • Typing: the "columns" in an Azure Table are strongly typed. CSV values are all strings.
  • Naming: Azure table and column names are restricted (see the naming rules on MSDN). A CSV has no naming restrictions on columns.

Practically, this means when “uploading” a CSV to an Azure Table, we need to provide the types of the columns (or just default to everything being strings). When “downloading” an Azure Table to a CSV, we assume all rows in the table have the same schema.

 

Uploading a CSV to an Azure Table

Here’s an example of uploading a data table as an Azure table:

// will fabricate partition and row keys, all types are strings
dt.SaveToAzureTable(Account(), "animals"); 

 

And here is the resulting Azure table, as viewed via Azure Storage Explorer. You can see that the single line only supplied a) an incoming data table and b) a target name for the Azure table to be created, so it picked intelligent defaults for the partition and row key, and all columns are typed as string.

[screenshots: the resulting "animals" table as shown in Azure Storage Explorer]

 

We can pass in a Type[] to provide stronger typing for the columns. In this case, we’re saving the “score” column as an int.

// provide stronger typing
var columnTypes = new Type[] { typeof(string), typeof(string), typeof(int) };
dt.SaveToAzureTable(Account(), "animals2", columnTypes);

 

[screenshot: the resulting "animals2" table in Azure Storage Explorer]

How is the partition and row key determined when uploading?

Every entity in an azure table needs a Partition and Row Key.

  1. If the CSV does not have columns named PartitionKey or RowKey, then the library will fabricate values. The partition key will be a constant (eg, everything gets put on the same partition), and the RowKey is just a row counter.
  2. If the csv has a column for PartitionKey or RowKey, then those will be used.
  3. One of the overloads to SaveToAzureTable takes a function that can compute a partition and row key per row.

Here’s an example of the 3rd case, where a user provided function computes the partition and row key on the fly for each row.

dt.SaveToAzureTable(Account(), "animals3", columnTypes, 
(index, row) => new ParitionRowKey { PartitionKey = "x", RowKey = row["name"] });

 

Downloading an Azure Table as a CSV

Here we can download an Azure table to a CSV in a single line.

            var dataFromTable = DataTable.New.ReadAzureTableLazy(Account(), "animals2");

The convention in the CsvTools packages is that methods ending in “Lazy” are streaming, so this can handle larger-than-memory tables.
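
For example, a sketch of the egress scenario: stream a table straight to a local CSV file by handing SaveToStream a StreamWriter instead of Console.Out.

    var bigTable = DataTable.New.ReadAzureTableLazy(Account(), "animals2");
    using (var writer = new System.IO.StreamWriter(@"c:\temp\animals2.csv"))
    {
        bigTable.SaveToStream(writer);  // rows stream through rather than being materialized first
    }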

We can then print it out to the console (or any stream) or do anything else with the table. For example, to just dump the table to the console, do this:

                          dataFromTable.SaveToStream(Console.Out); // print to console

And it prints:

PartitionKey,RowKey,Timestamp,name,species,score
1,00000000,2012-08-03T21:04:08.1Z,Kermit,Frog,10
1,00000001,2012-08-03T21:04:08.1Z,Ms. Piggy,Pig,50
1,00000002,2012-08-03T21:04:08.103Z,Fozzy,Bear,23

Notice that the partition key, row key, and timestamp are included as columns in the CSV.

Of course, once we have a DataTable instance, it doesn’t matter that it came from an Azure Table. We can use any of the normal facilities in CsvTools to operate on the table. For example, we could use the strong binding to convert each row to a class and then operate on that:

// Read back from table as strong typing 
var dataFromTable = DataTable.New.ReadAzureTableLazy(Account(), "animals2");
IEnumerable<Animal> animals = dataFromTable.RowsAs<Animal>();
foreach (var row in animals)
{
    Console.WriteLine("{0},{1},{2}%", row.name, row.species, row.score / 100.0);
}
  
// Class doesn't need to derive from TableContext
class Animal
{
    public string name { get; set; }
    public string species { get; set; }
    public int score { get; set; }
}

 

Full sample

Here’s the full sample.

This is a C# 4.0 console application (Client Profile), with a Nuget package reference to CsvTools.Azure, and it uses a dummy csv file at c:\temp\test.csv (see above).

When you add the nuget reference to CsvTools.Azure, Nuget’s dependency management will automatically bring down references to CsvTools (the core CSV reader that implements DataTable) and even the azure storage libraries. I love Nuget.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

using DataAccess;
using Microsoft.WindowsAzure;

class Program
{
    static CloudStorageAccount Account()
    {
        return CloudStorageAccount.DevelopmentStorageAccount;
    }

    static void Main(string[] args)
    {
        var dt = DataTable.New.Read(@"c:\temp\test.csv");

        // Write and Read from blobs
        dt.SaveToAzureBlob(Account(), "testcontainer", "test.csv");
        var dataFromBlob = DataTable.New.ReadAzureBlob(Account(), "testcontainer", "test.csv"); // read it back
        dataFromBlob.SaveToStream(Console.Out); // print to console

        // Write and read from Tables

        // will fabricate partition and row keys, all types are strings
        dt.SaveToAzureTable(Account(), "animals");

        // provide stronger typing
        var columnTypes = new Type[] { typeof(string), typeof(string), typeof(int) };
        dt.SaveToAzureTable(Account(), "animals2", columnTypes);

        {
            Console.WriteLine("get an Azure table and print to console:");
            var dataFromTable = DataTable.New.ReadAzureTableLazy(Account(), "animals2");
            dataFromTable.SaveToStream(Console.Out); // print to console
            Console.WriteLine();
        }

        {
            Console.WriteLine("Demonstrate strong typing");
            // Read back from table as strong typing 
            var dataFromTable = DataTable.New.ReadAzureTableLazy(Account(), "animals2");
            IEnumerable<Animal> animals = dataFromTable.RowsAs<Animal>();
            foreach (var row in animals)
            {
                Console.WriteLine("{0},{1},{2}%", row.name, row.species, row.score / 100.0);
            }
        }

        // Write using a row and partition key        

    }

    // Class doesn't need to derive from TableContext
    class Animal
    {
        public string name { get; set; }
        public string species { get; set; }
        public int score { get; set; }
    }
}


by Mike Stall - MSFT at August 04, 2012 01:02 AM

June 02, 2012

Aaron Marten's WebLog

“Invalid License Data” after VS 11 Beta to VS 2012 RC Upgrade

We’ve been seeing reports of some users hitting an “Invalid License Data” error on VS startup after upgrading from VS 11 Beta to VS 2012 RC. This could be due to upgrading from a “higher” Beta SKU (e.g. Ultimate) to a “lower” RC SKU (e.g. Professional).

Fortunately, the fix is simple. Go to the Windows Programs and Features control panel and uninstall the Visual Studio 11 Beta.

by Aaron Marten at June 02, 2012 02:54 PM

May 19, 2012

Mike Stall

Strong binding for CSV reader

 

I updated my open source CSV reader to support parsing rows back into strongly typed objects. You can get it from Nuget as CsvTools 1.0.6.

For example, suppose we have a CSV file “test.csv” like so:

name, species, favorite fruit, score
Kermit, Frog, apples, 18%
Ms. Piggy, Pig, pears, 22%
Fozzy, Bear, bananas, 19.4%

You can open the CSV and read the rows with loose typing (as strings):

var dt = DataTable.New.Read(@"c:\temp\test.csv");
IEnumerable<string> rows = from row in dt.Rows select row["Favorite Fruit"];

But it’s very convenient to use strongly-typed classes. We can define a strongly-typed class for the CSV:

enum Fruit
{
    apples,
    pears,
    bananas,
}
class Entry
{
    public string Name { get; set; }
    public string Species { get; set; }
    public Fruit FavoriteFruit { get; set; }
    public double Score { get; set; }
}

We can then read via the strongly-typed class as:

var dt = DataTable.New.Read(@"c:\temp\test.csv"); 
Entry[] entries = dt.RowsAs<Entry>().ToArray(); // read all entries

We can also use linq expressions like:

IEnumerable<Fruit> x = from row in dt.RowsAs<Entry>() select row.FavoriteFruit;

 

What are the parsing rules?

Parsing can get arbitrarily complex. This uses some simple rules that solved the scenarios I had.

The parser looks at each property on the strong type and matches it to a column from the CSV. Since property names are restricted to C# identifiers, whereas column names can have arbitrary characters (and thus be invalid C# identifiers), the matching here is flexible: it matches properties to columns by looking only at the alphanumeric characters. So the “FavoriteFruit” property matches the “Favorite Fruit” column name.

To actually parse the row value from a string to the target type, T, it uses the following rules:

  1. if T is already a string, just return the value
  2. special case doubles parsing to allow the percentage sign. (Parse 50% as .50).
  3. if T has a TryParse(string, out T) method, then invoke that.  I found TryParse to be significantly faster than invoking a TypeConverter.
  4. Else use a TypeConverter. This is a general and extensible hook.

Errors are ignored. The rationale here is that if I have 3 million rows of CSV data, I don’t want to throw an exception on reading just because one row has bad data.
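
To make rules 2 and 3 concrete, here's a sketch with a hypothetical records.csv and Record class: the Guid column binds through Guid.TryParse (rule 3) and the percentage column through the double special case (rule 2), with no custom code.

    class Record
    {
        public Guid Id { get; set; }          // rule 3: bound via Guid.TryParse(string, out Guid)
        public double Discount { get; set; }  // rule 2: "15%" is parsed as 0.15
    }

    var records = DataTable.New.Read(@"c:\temp\records.csv").RowsAs<Record>().ToArray();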

Under the hood

DataTable.RowsAs<T>() uses expression trees to build a strongly typed dynamic method of Func<Row, T>. I originally used reflection to enumerate the properties, find the appropriate parsing technique, and set the value on the strong type. Switching to pre-compiled methods was about a 10x perf win.

In this case, the generated method looks something like this:

class EnumParser
{
    const int columnIndex_Name = 0;
    const int columnIndex_species = 1;

    TypeConverter _convertFavoriteFruit = TypeDescriptor.GetConverter(typeof(Fruit));
    const int columnIndex_Fruit = 2;

    const int columnIndex_Score = 3;

    public Entry Parse(Row r)
    {
        Entry newObj = new Entry();
        newObj.FavoriteFruit = (Fruit) _convertFavoriteFruit.ConvertFrom(r.Values[columnIndex_Fruit]);
        newObj.Name = r.Values[columnIndex_Name];
        newObj.Species = r.Values[columnIndex_species];
        newObj.Score = ToDouble(r.Values[columnIndex_Score]);
        return newObj;
    }    
}

The parse method is a Func<Row, Entry> which can be invoked on each row. It’s actually a closure so that it can capture the TypeConverters and only do the lookup once. The mapping between property names and column names can also be done upfront and captured in the columnIndex_* constants.

by Mike Stall - MSFT at May 19, 2012 03:33 PM

May 11, 2012

Mike Stall

Per-controller configuration in WebAPI

We’ve just added support for WebAPI to provide per-controller-type configuration. WebAPI has a HttpConfiguration object that provides configuration such as:

  • route table
  • Dependency resolver for specifying services
  • list of Formatters, ModelBinders, and other parameter binding settings.
  • list of message handlers,

However, a specific controller may need its own specific services. And so we’ve added per-controller-type configuration. In essence, a controller type can have its own “shadow copy” of the global config object, and then override specific settings.  This is automatically applied to all controller instances of the given controller-type. (This supersedes the HttpControllerConfiguration attribute that we had in Beta)

Some of the scenarios we wanted to enable here:

  1. A controller may have its own specific list of Formatters, for both reading and writing objects.
  2. A controller may have special dynamic actions that aren’t based on reflecting over C# methods, and so may need its own private action selector.
  3. A controller may need its own IActionValueBinder. For example, you might have an HtmlController base class that has an MVC-style parameter binder that handles FormUrl data.

In all these cases, the controller is coupled to a specific service for basic correct operation, and these services really are private implementation details of the controller that shouldn’t conflict with settings from other controllers. Per-controller config allows multiple controllers to override their own services and coexist peacefully in an app together.

 

How to setup per-controller config?

We’ve introduced the IControllerConfiguration interface:

public interface IControllerConfiguration
{
    void Initialize(HttpControllerSettings controllerSettings,
                    HttpControllerDescriptor controllerDescriptor);
}

WebAPI will look for attributes on the controller that implement that interface, and then invoke them when initializing the controller-type. This follows the same inheritance order as constructors, so attributes on the base type will be invoked first.

The controllerSettings object specifies which things on the configuration can be overridden for a controller. This provides static knowledge of what things on a configuration can and can’t be specified for a controller. Obviously, things like message handlers and routes can’t be specified on a per-controller basis.

public sealed class HttpControllerSettings
{
    public MediaTypeFormatterCollection Formatters { get; }
    public ParameterBindingRulesCollection ParameterBindingRules { get; }
    public ServicesContainer Services { get; }        
}

So an initialization function can change the services, formatters, or binding rules. Then WebAPI will create a new shadow HttpConfiguration object and apply those changes. Things that are not changed will still fall through to the global configuration.

 

Example

Here’s an example. Suppose we have our own controller type, and we want it to only use a specific formatter and IActionValueBinder.

First, we add a config attribute:

[AwesomeConfig]
public class AwesomeController : ApiController
{
    [HttpGet]
    public string Action(string s)
    {
        return "abc";
    }
}

That attribute implements IControllerConfiguration:

class AwesomeConfigAttribute : Attribute, IControllerConfiguration
{
    public void Initialize(HttpControllerSettings controllerSettings,
        HttpControllerDescriptor controllerDescriptor)
    {
        controllerSettings.Services.Replace(typeof(IActionValueBinder), new AwesomeActionValueBinder());
        controllerSettings.Formatters.Clear();
        controllerSettings.Formatters.Add(new AwesomeCustomFormatter());
    }
}

This will clear all the default formatters and add our own AwesomeCustomFormatter. It will also set the IActionValueBinder to our own AwesomeActionValueBinder. It will not affect any other controllers in the system.

Setting a service on the controller here has higher precedence than setting services in the dependency resolver or in the global configuration.

The initialization function can also inspect incoming configuration and modify it. For example, it can append a formatter or binding rule to an existing list.
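
For instance, here is a minimal sketch of an initializer that appends rather than replaces (AwesomeCustomFormatter is the same made-up formatter from the example above); the formatters inherited from the global configuration stay in place:

class AppendFormatterConfigAttribute : Attribute, IControllerConfiguration
{
    public void Initialize(HttpControllerSettings controllerSettings,
        HttpControllerDescriptor controllerDescriptor)
    {
        // Add a controller-specific formatter after the inherited ones instead of clearing the list.
        controllerSettings.Formatters.Add(new AwesomeCustomFormatter());
    }
}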

What happens under the hood?

This initialization function is invoked when WebAPI is first creating the HttpControllerDescriptor for this controller type. It’s only invoked once per controller type. WebAPI will then apply the controllerSettings and create a new HttpConfiguration object. There are some optimizations in place to make this efficient:

  • If there’s no change, it shares the same config object and doesn’t create a new one.
  • The new config object reuses much of the original one. There are several copy-on-write optimizations in place. For example, if you don’t touch the formatters, we avoid allocating a new formatter collection.

Then the resulting configuration is used for future instances of the controller. Calling code still just gets an HttpConfiguration instance and doesn’t need to care whether that instance was the global configuration or a per-controller configuration. So when the controller asks for formatters or an IActionValueBinder here, it will automatically pull from the controller’s config instead of the global one.

by Mike Stall - MSFT at May 11, 2012 05:50 PM

Mike Stall

WebAPI Parameter binding under the hood

I wrote about WebAPI’s parameter binding at a high level before. Here’s what’s happening under the hood. The most fundamental object for binding parameters from a request in WebAPI is a HttpParameterBinding. This binds a single parameter. The binding is created upfront and then is invoked across requests. This means the binding must be determined from static information such as the parameter’s name, type, or global config.  A parameter binding has a reference to the HttpParameterDescriptor, which provides static information about the parameter from the action’s signature.

Here’s the key method on HttpParameterBinding: 

public abstract Task ExecuteBindingAsync(
    ModelMetadataProvider metadataProvider,
    HttpActionContext actionContext,
    CancellationToken cancellationToken);

This is invoked on each request to perform the actual binding. It takes in the action context (which has the incoming request) and then does the binding and populates the result in the argument dictionary hanging off action context. This method returns a Task in case the binding needs to do an IO operation like read the content stream. 

Examples of bindings

WebAPI has two major parameter bindings: ModelBindingParameterBinder and FormatterParameterBinder.  The first uses model binding and generally assembles the parameter from the URI. The second uses the MediaTypeFormatters to read the parameter from the content stream.

Ultimately, these are both just derived classes from HttpParameterBinding. Once WebAPI gets the binding, it just invokes the ExecuteBindingAsync method and doesn’t care about the parameter’s type, its name, whether it had a default value, whether it was model binding vs. formatters, etc.

However, you can always add your own. For example, suppose you want to bind action parameters of type IPrincipal to automatically go against the thread’s current principal. Clearly, this doesn’t touch the content stream or need the facilities of model binding. You could create a custom binding like so:

    // Example of a binder
    public class PrincipalParameterBinding : HttpParameterBinding
    {
        public PrincipalParameterBinding(HttpParameterDescriptor p) : base(p) { }
 
        public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
            HttpActionContext actionContext, CancellationToken cancellationToken)
        {
            IPrincipal p = Thread.CurrentPrincipal;
            SetValue(actionContext, p);

            var tsc = new TaskCompletionSource<object>();
            tsc.SetResult(null);
            return tsc.Task;
        }
    }

The binding really could do anything. You could have custom bindings that go and pull values from a database.

Normally, you wouldn’t need to plug in your own HttpParameterBinding. Most scenarios can be solved by plugging in a simpler interface, like adding a formatter or model binder.

Who determines the binding?

This is ultimately determined by the IActionValueBinder, which is a pluggable service. Here’s the order that the DefaultActionValueBinder looks in to get a binding. (I described an alternative binder here which has MVC like semantics.)

Look for a ParameterBindingAttribute

The highest precedence is to use a ParameterBindingAttribute, which can be placed on a parameter site or on a parameter type’s declaration.  This lets you explicitly set the binding for a parameter.

    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Parameter, Inherited = true, AllowMultiple = false)]
    public abstract class ParameterBindingAttribute : Attribute
    {
        public abstract HttpParameterBinding GetBinding(HttpParameterDescriptor parameter);
    }

The virtual function here hints that this is really the base class of a hierarchy. [FromBody] and [ModelBinder] attributes both derive from [ParameterBinding]. [FromUri] derives from [ModelBinder] and just invokes model binding and constrains the inputs to be from the URI.

In our example, we could create our own custom attribute to provide PrincipalParameterBindings.
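
A sketch of what that attribute might look like (the name is made up; it reuses the PrincipalParameterBinding class from above):

    public class FromCurrentPrincipalAttribute : ParameterBindingAttribute
    {
        public override HttpParameterBinding GetBinding(HttpParameterDescriptor parameter)
        {
            return new PrincipalParameterBinding(parameter);
        }
    }

An action could then declare a parameter like [FromCurrentPrincipal] IPrincipal caller and get the thread's principal without touching the global configuration.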

Look at the ParameterBinding Rules in the Configuration

The HttpConfiguration has a collection of binding rules. This is checked if there is no ParameterBinding attribute.  Here are some examples of setting some binding rules for certain types.

            HttpConfiguration config = new HttpConfiguration();
 
            ParameterBindingRulesCollection pb = config.ParameterBindingRules;
            pb.Insert(typeof(IPrincipal), param => new PrincipalParameterBinding(param)); // custom binder against request
            pb.Insert(typeof(Location), param => param.BindWithModelBinding(new LocationModelBinder())); 
            pb.Insert(typeof(string), param => param.BindWithFormatter(new CustomMediaFormatter()));

The first rule says that all IPrincipal types should be bound using our IPrincipal binder above.

The second rule says that all Location types should be bound using Model Binding, and specifically use the LocationModelBinder  (which would implement IModelBinder).

The third rule says that all strings should be bound with the formatters.

Rules are executed in order and work against exact type matches.

The binding rules actually operate on a ParameterDescriptor. The Insert() methods above are just using a helper that filters based on the parameter’s type. So you could add a rule that binds based on a parameter’s name, or on any other information available from the parameter descriptor.

Setting rules lets your config describe how types should be bound, and avoids having to decorate every call site with an attribute.

The configuration has some default entries in the parameter binding rules collection:

  • bind the cancellation token
  • bind the HttpRequestMessage  (without this rule, HttpRequestMessage would be seen as a complex object and so we’d naturally try to read it from the body using a formatter)
  • prevent accidentally binding any class derived from HttpContent. (This is trying to protect users from accidentally having a formatter try to bind)

Since these are just regular entries in the rule collection, you can supersede them by inserting a broader rule in front of them. Or you can clear the collection completely. 

The rules here are extremely flexible and can solve several scenarios:

  1. Allow you to override WebAPI’s behavior for special types like cancellation token, HttpContent, or HttpRequestMessage. For example, if you did want the HttpContent to bind against the Request.Content, you could add a rule for that. Or if you had multiple cancellation tokens floating around and wanted to bind them by name, you could add a rule for that too.
  2. Specify whether a type should use model binding or a formatter by default.  Maybe you have a complex type that should always use model binding (eg, Location in the above example). Just adding a formatter doesn’t mean that a type automatically uses it. After all, you could add both a formatter and a model binder for the same type. And some formatters and model binders eagerly claim to handle all types (eg, JSON.Net thinks it can handle anything, even a wacky type like a delegate or COM object). So WebAPI needs a hint, and parameter binding rules can provide that hint.
  3. Create a binding rule once that applies globally, without having to touch up every single action signature.
  4. Create binding rules that require rich type information. For example, you could create a “TryParse” rule that looks if a parameter type has a “bool TryParse(string s, out T)” method, and if so, binds that parameter by invoking that method.
  5. Instead of binding by type, bind by name, and coerce the value to the given parameter type.

Fallback to a default policy

If there is no attribute, and there is no rule that claims the parameter descriptor, then the default binder falls back to its default policy: basically, simple types are model-bound against the URI, and complex types are read from the body using formatters.

by Mike Stall - MSFT at May 11, 2012 05:25 AM

April 24, 2012

Mike Stall

Excel on Azure

 

I amended my open-source CsvTools with an Excel reader. Once I read the Excel worksheet into a data table, I can use all the data table operators from the core CsvTools, including enumeration, Linq over the rows, analysis, mutation, and saving back out as a CSV. So this gives me a Linq-to-Excel-on-Azure experience, which ought to win a buzzword bingo contest!

The excel reader uses the OpenXml SDK, and so it can run on Azure.  This is useful because Excel as a COM-object doesn’t run on servers, and so I couldn’t upload excel files to my ASP.Net projects without really fighting the security settings. With OpenXml, it’s easy since you’re just reading XML.

Here’s a little Azure MVC test page that demonstrates uploading an xlsx file and displaying the contents in Azure.

(side note: deploying MVC to Azure is super easy, courtesy of this great tutorial).

I also need to give a shout-out for Nuget! The dependency management here was great. I have one Nuget package for the core CsvTools (which is just the CSV reader with no dependencies) , and another package CsvTools.Excel (which has a dependency on CsvTools and the OpenXml SDK).

The excel reader is an extension method exposed off “DataTable.New”, so it’s easily discoverable.

Here’s a sample excel sheet, foo.xlsx:

[screenshot: foo.xlsx, a small worksheet with Name and age columns]

And then the code to read it from C#:

private static void TestExcel()
{
    var dt = DataTable.New.ReadExcel(@"c:\temp\foo.xlsx");
    var names = from row in dt.Rows where int.Parse(row["age"]) > 10 select row["Name"];
    foreach (var name in names)
    {
        Console.WriteLine(name);
    }            
}

This example just reads the first worksheet in the workbook, which is the common case for my usage scenarios where people are using excel as a CSV format.  It prints:

Ed
John

There are also some other overloads that give the whole list of worksheets.

public static IList<MutableDataTable> ReadExcelAllSheets(this DataTableBuilder builder, string filename);
public static IList<MutableDataTable> ReadExcelAllSheets(this DataTableBuilder builder, Stream input);
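
For example, a quick sketch using the first overload against the same foo.xlsx as above:

IList<MutableDataTable> sheets = DataTable.New.ReadExcelAllSheets(@"c:\temp\foo.xlsx");
Console.WriteLine("workbook has {0} sheet(s)", sheets.Count);
sheets[0].SaveToStream(Console.Out);   // dump the first worksheet back out in CSV form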
 

The reader is intended for Excel workbooks that represent tabular data and is not hardened against weird or malformed input.

Anyway, I’m finding this useful for some experiments, and sharing in case somebody else finds it useful too. 

(Now I just need to throw in a WebAPI parameter binding for DataTables, use WebAPI’s query string support, and add some data table Azure helpers and I will be the buzzword bingo champion!)

by Mike Stall - MSFT at April 24, 2012 06:06 AM

April 23, 2012

Mike Stall

How to create a custom value provider in WebAPI

Here’s how you can easily customize WebAPI parameter binding to include values from sources other than the URL.  The short answer is that you add a custom ValueProvider and use Model Binding, just like in MVC.

ValueProviders are used to provide values for simple types and match on parameter name.  ValueProviders serve up raw pieces of information and feed into the Model Binders. Model Binders compose that information (eg, building collections or complex types) and do type coercion (eg, string to int, invoke type converters, etc). 

Here’s a custom value provider that extracts information from the request headers.  In this case, our action will get the userAgent and host from the headers. This doesn’t interfere with other parameters, so you can still get the id from the query string as normal and read the body.

    public class ValueProviderTestController : ApiController
    {
        [HttpGet]
        public object GetStuff(string userAgent, string host, int id)
        {   
            // userAgent and host are bound from the Headers. id is bound from the query string. 
            // This just echos back. Do something interesting instead.
            return string.Format(
@"User agent: {0},
host: {1}
id: {2}", userAgent, host, id);
        }
    }

 

So when I run it and hit it from a browser, I get a string back like so:

User agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0),
host: localhost:8080
id: 45

Note that the client needs to set the headers in the request. Browsers will do this. But if you just call directly with HttpClient.GetAsync(), the headers will be empty.

 

We define a HeaderValueProviderFactory class (source below), which derives from ValueProviderFactory and supplies model binding with the information from the header.

We need to register the header value provider.  We can do it globally in the config, like so:

            // Append our custom valueprovider to the list of value providers.
            config.Services.Add(typeof(ValueProviderFactory), new HeaderValueProviderFactory());

Or we can do it just on a specific parameter without touching global config by using the [ValueProvider] attribute, like so:

    public object GetStuff([ValueProvider(typeof(HeaderValueProviderFactory))] string userAgent)

The [ValueProvider] attribute just derives from the [ModelBinder] attribute and says “use the default model binding, but supply these value providers”.

What’s happening under the hood? For refresher reading, see How WebAPI does parameter binding. In this case, it sees the parameter is a simple type (string), and so it will bind via Model Binding. Model binding gets a list of value providers (either from the attribute or the configuration), and then looks up the parameter’s name (userAgent, host, id) in that list.  Model Binding will also do composition and coercion.

 

 

Sources

WebAPI is an open source project and some of this may need the post-beta source. For example, service resolver has been cleaned up since beta, so it’s now easier to add new services.

Here’s the source for a test client: It includes a loop so that you can hit the service from a browser.

        static void TestHeaderValueProvider()
        {
            string prefix = "http://localhost:8080";
            HttpSelfHostConfiguration config = new HttpSelfHostConfiguration(prefix);
            config.Routes.MapHttpRoute("Default", "{controller}/{action}");

            // Append our custom valueprovider to the list of value providers.
            config.Services.Add(typeof(ValueProviderFactory), new HeaderValueProviderFactory());
            
            HttpSelfHostServer server = new HttpSelfHostServer(config);
            server.OpenAsync().Wait();

            try
            {
                // HttpClient will make the call, but won't set the headers for you. 
                HttpClient client = new HttpClient();
                var response = client.GetAsync(prefix + "/ValueProviderTest/GetStuff?id=20").Result;

                // Browsers will set the headers. 
                // Loop. You can hit the request via: http://localhost:8080/Test2/GetStuff?id=40
                while (true)
                {
                    Thread.Sleep(1000);
                    Console.Write(".");
                }
            }
            finally
            {
                server.CloseAsync().Wait();
            }
        }

 

Here’s the full source for the provider.

 

using System.Globalization;
using System.Net.Http.Headers;
using System.Reflection;
using System.Web.Http.Controllers;
using System.Web.Http.ValueProviders;

namespace Basic
{
    // ValueProvideFactory. This is registered in the Service resolver like so:
    //    config.Services.Add(typeof(ValueProviderFactory), new HeaderValueProviderFactory());
    public class HeaderValueProviderFactory : ValueProviderFactory
    {
        public override IValueProvider GetValueProvider(HttpActionContext actionContext)
        {
            HttpRequestHeaders headers = actionContext.ControllerContext.Request.Headers;
            return new HeaderValueProvider(headers);
        }
    }

    // ValueProvider for extracting data from headers for a given request message. 
    public class HeaderValueProvider : IValueProvider
    {
        readonly HttpRequestHeaders _headers;

        public HeaderValueProvider(HttpRequestHeaders headers)
        {
            _headers = headers;
        }

        // Headers doesn't support property bag lookup interface, so grab it with reflection.
        PropertyInfo GetProp(string name)
        {
            var p = typeof(HttpRequestHeaders).GetProperty(name,
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.IgnoreCase);
            return p;
        }

        public bool ContainsPrefix(string prefix)
        {
            var p = GetProp(prefix);
            return p != null;
        }

        public ValueProviderResult GetValue(string key)
        {
            var p = GetProp(key);
            if (p != null)
            {
                object value = p.GetValue(_headers, null);
                string s = value.ToString(); // for simplicity, convert to a string
                return new ValueProviderResult(s, s, CultureInfo.InvariantCulture);
            }
            return null; // none
        }
    }
}

by Mike Stall - MSFT at April 23, 2012 05:29 PM

April 20, 2012

Mike Stall

How to bind to custom objects in action signatures in MVC/WebAPI

MVC provides several ways for binding your own arbitrary parameter types.  I’ll describe some common MVC ways and then show how this applies to WebAPI too. You can view this as an MVC-to-WebAPI migration guide.  (Related reading: How WebAPI binds parameters)

Say we have a complex type, Location, which just has an X and Y. And we want to create that by invoking a Parse(string) function.  The question then becomes: how do I wire up my custom Parse(string) function into WebAPI’s parameter binding system?

Query string: /?loc=123,456  

And then this action gets invoked and the parameter is bound from the query string:

        public object MyAction(Location loc)
        {
            // expect that loc.X = 123, loc.Y = 456
        }

 

Here’s the C# code for my Location class, plus the essential parse function:

    // A complex type
    public class Location
    {        
        public int X { get; set; }
        public int Y { get; set; }

        // Parse a string into a Location object. "1,2" --> Loc(X=1,Y=2)
        public static Location TryParse(string input)
        {
            var parts = input.Split(',');
            if (parts.Length != 2)
            {
                return null;
            }

            int x,y;
            if (int.TryParse(parts[0], out x) && int.TryParse(parts[1], out y))
            {
                return new Location { X = x, Y = y };                
            }

            return null;
        }

        public override string ToString()
        {
            return string.Format("{0},{1}", X, Y);
        }
    }

 

Option Fail: what if I do nothing?

If you just define a Location class, but don’t tell WebAPI/MVC about the parse function, it won’t know how to bind it. It may make a best effort, but the Location parameter will be empty.

In WebAPI, we’ll see Location is a complex type, assume it’s coming from the request’s body and so try to invoke a Formatter on it.  WebAPI will search for a formatter that matches the content type and claims to handle the Location type. The formatter likely won’t find anything in the body and leave the parameter empty.

 

Option #1: Manually call the parse function

You can always take the string in the action signature and manually call the parse function yourself.

        public object MyAction1(string loc)
        {
            Location loc2 = Location.TryParse(loc); // explicitly convert string
            // now use loc2 ... 
        }

You can still do this in WebAPI, exactly as is.

What does WebAPI do under the hood? In WebAPI, the string parameter is seen as a simple type, and so it uses model binding to pull ‘loc’ from the query string.

 

Option #2: Use a TypeConverter to make the complex type be simple

Or we can do it with a TypeConverter. This just teaches the model binding system where to find the Parse() function for the given type.

    public class LocationTypeConverter : TypeConverter
    {
        public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
        {
            if (sourceType == typeof(string))
            {
                return true;
            }
            return base.CanConvertFrom(context, sourceType);
        }

        public override object ConvertFrom(ITypeDescriptorContext context,
            System.Globalization.CultureInfo culture, object value)
        {
            if (value is string)
            {
                return Location.TryParse((string)value);
            }
            return base.ConvertFrom(context, culture, value);
        }
    }

And then add the appropriate attribute to the Location’s type declaration:

   [TypeConverter(typeof(LocationTypeConverter))]
   public class Location
   {  ... }

Now in both MVC and WebAPI, your action will get called and the Location parameter is bound:

 

public object MyAction(Location loc)        
{
   // use loc
}

What does WebAPI do under the hood? The presence of a TypeConverter that converts from string means that WebAPI classifies this as a “simple type”. Simple types use model binding. WebAPI will get ‘loc’ from the query string by matching the parameter name, see the parameter’s type is “Location” and then invoke the TypeConverter to convert from string to Location.

 

Option #3: Use a custom model binder

Another way is to use a custom model binder. This essentially just teaches the model binding system about the Location parse function. There are two key parts here:
  a) defining the model binder and
  b) wiring it up to the system so that it gets used.

Part a) Writing a custom model binder:

Here’s in MVC:

    public class LocationModelBinder : IModelBinder
    {
        public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            string key = bindingContext.ModelName;
            ValueProviderResult val = bindingContext.ValueProvider.GetValue(key);
            if (val != null)
            {
                string s = val.AttemptedValue as string;
                if (s != null)
                {
                    return Location.TryParse(s);
                }
            }
            return null;
        }
    }

Of course, once you’ve written a custom model binder, you can do a lot more with it than just call a Parse() function. But that’s another topic…

Defining a custom model binder is very similar in WebAPI. We still have a corresponding IModelBinder interface, and the design pattern is the same, but its signature is slightly different:

    public bool BindModel(HttpActionContext actionContext, ModelBindingContext bindingContext)

MVC takes in a controller context, whereas WebAPI takes in an actionContext (which has a reference to a controller context). And MVC returns the object for the model, whereas WebAPI returns a bool and sets the model result on the binding context. (As a reminder, WebAPI and MVC share design patterns, but have different types. So while you can often cut and paste code between them, you may need to touch up the namespaces.)
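
For reference, here is what the WebAPI flavor of the same binder might look like. This is only a sketch against the post-beta System.Web.Http.ModelBinding types (exact names may vary with your build), reusing the Location.TryParse helper defined earlier:

    // WebAPI flavor of the Location binder (sketch): report success with a bool
    // and set bindingContext.Model instead of returning the model object.
    public class LocationModelBinder : System.Web.Http.ModelBinding.IModelBinder
    {
        public bool BindModel(HttpActionContext actionContext, ModelBindingContext bindingContext)
        {
            ValueProviderResult val = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
            if (val != null)
            {
                string s = val.AttemptedValue as string;
                if (s != null)
                {
                    bindingContext.Model = Location.TryParse(s);
                    return bindingContext.Model != null;
                }
            }
            return false;
        }
    }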

 

Part B) now we need to wire up the model binder.

In both MVC and WebAPI, there are 3 places you can do this.

1) The highest precedence location is the one closest to the parameter. Just add a [ModelBinder] attribute on the parameter in the action signature:

        public object  MyAction2(
            [ModelBinder(typeof(LocationModelBinder))]
            Location loc) // Use model binding to convert
        {
            // use loc...
        }

This is the same as WebAPI. (In WebAPI, this was only supported after beta, so if you’re pre-RTM, you’ll need the latest sources)

2) Add a [ModelBinder] attribute on the type’s declaration:

        [ModelBinder(typeof(LocationModelBinder))]
        public class Location { ... }

Same as WebAPI, like #1.

3) Change it in a global config setting

In MVC, this is in the global.asax file. An easy way is just like so:

       ModelBinders.Binders.Add(typeof(Location), new LocationModelBinder());            

In WebAPI, registration is on the HttpConfiguration object; WebAPI strictly goes through the service resolver. One gotcha: you need to register custom model binder providers at the front of the list, because the default list contains MutableObjectModelBinder, which zealously claims all types and would shadow your custom binder if it were just appended to the end.

            
            config.Services.Insert(typeof(System.Web.Http.ModelBinding.ModelBinderProvider),
                0, // Insert at front to ensure other catch-all binders don’t claim it first
                new LocationModelBinderProvider());
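
The LocationModelBinderProvider used above isn’t spelled out in this post; a plausible sketch, assuming the RTM-era ModelBinderProvider base class (its GetBinder signature changed after beta), looks like this:

    // Hypothetical provider for the registration above: claim only the Location
    // type and return null for everything else so other providers handle it.
    public class LocationModelBinderProvider : System.Web.Http.ModelBinding.ModelBinderProvider
    {
        public override System.Web.Http.ModelBinding.IModelBinder GetBinder(
            HttpConfiguration configuration, Type modelType)
        {
            if (modelType == typeof(Location))
            {
                return new LocationModelBinder();
            }
            return null;
        }
    }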

And then in WebAPI, you still need to add an empty [ModelBinder] attribute on the parameter to tell WebAPI to look in the model binders instead of trying to use  a formatter on it.

The [ModelBinder] doesn’t need to specify the binder type because you provided it in the config object.

        public object  MyAction2([ModelBinder] Location loc) // Use model binding to convert
        {
            // use loc...
        }

What does WebAPI do under the hood? In all 3 cases, WebAPI sees a [ModelBinder] attribute associated with the parameter (either on the Parameter or on the Parameter’s Type’s declaration). The model binder attribute can either supply the binder directly (as in cases #1 and #2) or fetch the binder from the config (case #3). WebAPI then invokes that binder to get a value for the parameter.

 

        

Other places to hook?

WebAPI is very extensible and you could try to hook other places too, but the ones above are the most common and easiest for this scenario. But for completeness’ sake, I’ll mention a few other options, which I may blog about more later:

  • For example, you could hook the IActionValueBinder (here’s an example of an MVC-style parameter binder), IHttpActionInvoker (to populate right before invoking the action), or even populate parameters through a filter.
  • By default, complex types try to come from the body, and the body is read via Formatters. So you could also try to provide a custom formatter. However, that’s not ideal because in our example, we wanted data from the query string and Formatters can’t read the query string.

by Mike Stall - MSFT at April 20, 2012 09:30 PM

Miguel de Icaza

XNA on Windows 8 Metro

The MonoGame Team has been working on adding Windows 8 Metro support to MonoGame.

This will be of interest to all XNA developers that wanted to target the Metro AppStore, since Microsoft does not plan on supporting XNA on Metro, only on the regular desktop.

The effort is taking place on IRC in the #monogame channel on irc.gnome.org. The code is being worked in the develop3d branch of MonoGame.

by Miguel de Icaza (miguel@gnome.org) at April 20, 2012 03:07 AM

April 19, 2012

Mike Stall

MVC Style parameter binding for WebAPI

I described earlier how WebAPI binds parameters. The entire parameter binding behavior is determined by the IActionValueBinder interface and can be swapped out. The default implementation is DefaultActionValueBinder.

Here’s another IActionValueBinder that provides MVC parameter binding semantics. This lets you do things that you can’t do in WebAPI’s default binder, specifically:

  1. ModelBinds everything, including the body. Assumes the body is FormUrl encoded
  2. This means you can do MVC scenarios where a complex type is bound with one field from the query string and one field from the form data in the body.
  3. Allows multiple parameters to be bound from the body.

 

Brief description of IActionValueBinder

Here’s what IActionValueBinder looks like:

    public interface IActionValueBinder
    {
        HttpActionBinding GetBinding(HttpActionDescriptor actionDescriptor);
    }

This is called to bind the parameters. It returns an HttpActionBinding object, which is 1:1 with an ActionDescriptor and can be cached across requests. The interesting method on that binding object is:

    public virtual Task ExecuteBindingAsync(HttpActionContext actionContext, CancellationToken cancellationToken)

This will execute the bindings for all the parameters, and signal the task when completed. This will invoke model binding, formatters, or any other parameter binding technique. The parameters are added to the actionContext’s parameter dictionary.

You can hook IActionValueBinder to provide your own binding object, which can have full control over binding the parameters. This is a bigger hammer than adding formatters or custom model binders.

You can hook up an IActionValueBinder either through the service resolver or through the HttpControllerConfiguration attribute on a controller.
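
For example, hooking it up globally through the service resolver might look like the sketch below (the example that follows uses the per-controller attribute instead):

            // Replace the default binder for the whole configuration (sketch).
            HttpConfiguration config = new HttpConfiguration();
            config.Services.Replace(typeof(IActionValueBinder), new MvcActionValueBinder());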

Example usage:

Here’s an example usage. Suppose you have this code on the server. This is using the HttpControllerConfiguration attribute, and so all of the actions on that controller will use the binder. However, since it’s per-controller, that means it can still peacefully coexist with other controllers on the server.

    public class Customer
    {
        public string name { get; set; }
        public int age { get; set; }
    }

    [HttpControllerConfiguration(ActionValueBinder=typeof(MvcActionValueBinder))]
    public class MvcController : ApiController
    {
        [HttpGet]
        public void Combined(Customer item)
        {
        }
    }

And then here’s the client code to call that same action 3 times, showing the fields coming from different places.

        static void TestMvcController()
        {
            HttpConfiguration config = new HttpConfiguration();
            config.Routes.MapHttpRoute("Default", "{controller}/{action}", new { controller = "Home" });

            HttpServer server = new HttpServer(config);
            HttpClient client = new HttpClient(server);

            // Call the same action. Action has parameter with 2 fields. 

            // Get one field from URI, the other field from body
            {
                HttpRequestMessage request = new HttpRequestMessage
                {
                    Method = HttpMethod.Get,
                    RequestUri = new Uri("http://localhost:8080/Mvc/Combined?age=10"),
                    Content = FormUrlContent("name=Fred")
                };

                var response = client.SendAsync(request).Result;
            }

            // Get both fields from the body
            {
                HttpRequestMessage request = new HttpRequestMessage
                {
                    Method = HttpMethod.Get,
                    RequestUri = new Uri("http://localhost:8080/Mvc/Combined"),
                    Content = FormUrlContent("name=Fred&age=11")
                };

                var response = client.SendAsync(request).Result;
            }

            // Get both fields from the URI
            {
                var response = client.GetAsync("http://localhost:8080/Mvc/Combined?name=Bob&age=20").Result;
            }
        }
        static HttpContent FormUrlContent(string content)
        {
            return new StringContent(content, Encoding.UTF8, "application/x-www-form-urlencoded");
        }

 

The MvcActionValueBinder:

Here’s the actual code for the binder. It’s under 100 lines.  (Disclaimer: this requires the latest sources. I verified against this change. I had to fix an issue to allow ValueProviderFactory.GetValueProvider to return null.)

Notice that it reads the body once per request, creates a per-request ValueProvider around the form data, and stashes that in request-local-storage so that all of the parameters share the same value provider. This sharing is essential because the body can only be read once.

// Example of MVC-style action value binder.
using System;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Globalization;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.ModelBinding;
using System.Web.Http.ValueProviders;
using System.Web.Http.ValueProviders.Providers;

namespace Basic
{    
    // Binder with MVC semantics. Treat the body as KeyValue pairs and model bind it. 
    public class MvcActionValueBinder : DefaultActionValueBinder
    {
        // Per-request storage, uses the Request.Properties bag. We need a unique key into the bag. 
        private const string Key = "5DC187FB-BFA0-462A-AB93-9E8036871EC8";

        public override HttpActionBinding GetBinding(HttpActionDescriptor actionDescriptor)
        {
            MvcActionBinding actionBinding = new MvcActionBinding();
                                    
            HttpParameterDescriptor[] parameters = actionDescriptor.GetParameters().ToArray();
            HttpParameterBinding[] binders = Array.ConvertAll(parameters, p => DetermineBinding(actionBinding, p));

            actionBinding.ParameterBindings = binders;
                        
            return actionBinding;            
        }

        private HttpParameterBinding DetermineBinding(MvcActionBinding actionBinding, HttpParameterDescriptor parameter)
        {
            HttpConfiguration config = parameter.Configuration;

            var attr = new ModelBinderAttribute(); // use default settings
            
            ModelBinderProvider provider = attr.GetModelBinderProvider(config);
            IModelBinder binder = provider.GetBinder(config, parameter.ParameterType);

            // Alternatively, we could put this ValueProviderFactory in the global config.
            List<ValueProviderFactory> vpfs = new List<ValueProviderFactory>(attr.GetValueProviderFactories(config));
            vpfs.Add(new BodyValueProviderFactory());

            return new ModelBinderParameterBinding(parameter, binder, vpfs);
        }   

        // Derive from ActionBinding so that we have a chance to read the body once and then share that with all the parameters.
        private class MvcActionBinding : HttpActionBinding
        {                
            // Read the body upfront, and add it as a ValueProvider
            public override Task ExecuteBindingAsync(HttpActionContext actionContext, CancellationToken cancellationToken)
            {
                HttpRequestMessage request = actionContext.ControllerContext.Request;
                HttpContent content = request.Content;
                if (content != null)
                {
                    FormDataCollection fd = content.ReadAsAsync<FormDataCollection>().Result;
                    if (fd != null)
                    {
                        NameValueCollection nvc = fd.ReadAsNameValueCollection();

                        IValueProvider vp = new NameValueCollectionValueProvider(nvc, CultureInfo.InvariantCulture);

                        request.Properties.Add(Key, vp);
                    }
                }
                        
                return base.ExecuteBindingAsync(actionContext, cancellationToken);
            }
        }

        // Get a value provider over the body. This can be shared by all parameters. 
        // This gets the values computed in MvcActionBinding.
        private class BodyValueProviderFactory : ValueProviderFactory
        {
            public override IValueProvider GetValueProvider(HttpActionContext actionContext)
            {
                object vp;
                actionContext.Request.Properties.TryGetValue(Key, out vp);
                return (IValueProvider)vp; // can be null                
            }
        }
    }
}

--

by Mike Stall - MSFT at April 19, 2012 12:42 AM

April 16, 2012

Mike Stall

How WebAPI does Parameter Binding

Here’s an overview of how WebAPI binds parameters to an action method.  I’ll describe how parameters can be read, the set of rules that determine which technique is used, and then provide some examples.

 

[update] Parameter binding is ultimately about taking a HTTP request and converting it into .NET types so that you can have a better action signature. 

The request message has everything about the request, including the incoming URL with query string, content body, headers, etc.  Eg, without parameter binding, every action would have to take the request message and manually extract the parameters, kind of like this:

public object MyAction(HttpRequestMessage request)
{
        // make explicit calls to get parameters from the request object
        int id = int.Parse(request.RequestUri.ParseQueryString().Get("id")); // need error logic!
        Customer c = request.Content.ReadAsAsync<Customer>().Result; // should be async!
        // Now use id and customer
}
  

That’s ugly, error prone, repeats boilerplate code, misses corner cases, and is hard to unit test. You want the action signature to be something more relevant like:

public object MyAction(int id, Customer c) { }

So how does WebAPI convert from a request message into real parameters like id and customer?

Model Binding vs. Formatters

There are 2 techniques for binding parameters: Model Binding and Formatters. In practice, WebAPI uses model binding to read from the query string and Formatters to read from the body. 

(1) Using Model Binding:

ModelBinding is the same concept as in MVC, which has been written about a fair amount (such as here). Basically, there are “ValueProviders” which supply pieces of data such as query string parameters, and then a model binder assembles those pieces into an object.

(2) Using Formatters:

Formatters (see the MediaTypeFormatter class) are just traditional serializers with extra metadata such as the associated content type. WebAPI gets the list of formatters from the HttpConfiguration, and then uses the request’s content-type to select an appropriate formatter. WebAPI has some default formatters. The default JSON formatter is JSON.Net. There is an Xml formatter and a FormUrl formatter that uses JQuery’s syntax.

The key method is MediaTypeFormatter.ReadFromStreamAsync, which looks like this:

public virtual Task<object> ReadFromStreamAsync(
    Type type,
    Stream stream,
    HttpContentHeaders contentHeaders,
    IFormatterLogger formatterLogger)

Type is the parameter type being read, which is passed to the serializer. Stream is the request’s content stream. The read function then reads the stream, instantiates an object, and returns it.

HttpContentHeaders are just from the request message. IFormatterLogger is a callback interface that a formatter can use to log errors while reading (eg, malformed data for the given type).
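
As an illustration (not from the original post), here is a minimal custom formatter that reads a text/plain body into a string parameter. It matches the pre-RTM signature shown above; the exact overloads shifted between WebAPI builds, so treat this as a sketch.

using System;
using System.IO;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class PlainTextFormatter : MediaTypeFormatter
{
    public PlainTextFormatter()
    {
        // Only claim requests whose content type is text/plain.
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
    }

    public override bool CanReadType(Type type)
    {
        return type == typeof(string); // only bind string parameters
    }

    public override bool CanWriteType(Type type)
    {
        return false; // read-only formatter for this example
    }

    public override Task<object> ReadFromStreamAsync(Type type, Stream stream,
        HttpContentHeaders contentHeaders, IFormatterLogger formatterLogger)
    {
        // Read the whole body as a string. The reader isn't disposed because
        // the formatter doesn't own the request stream.
        var reader = new StreamReader(stream);
        var tcs = new TaskCompletionSource<object>();
        tcs.SetResult(reader.ReadToEnd());
        return tcs.Task;
    }
}

It would then be registered with config.Formatters.Add(new PlainTextFormatter());.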

Both model binding and formatters support validation and log rich error information.  However, model binding is significantly more flexible.

When do we use which?

Here are the basic rules to determine whether a parameter is read with model binding or a formatter:

  1. If the parameter has no attribute on it, then the decision is made purely on the parameter’s .NET type. “Simple types” use model binding. Complex types use the formatters. A “simple type” includes: primitives, TimeSpan, DateTime, Guid, Decimal, String, or something with a TypeConverter that converts from strings.
  2. You can use a [FromBody] attribute to specify that a parameter should be from the body.
  3. You can use a [ModelBinder] attribute on the parameter or the parameter’s type to specify that a parameter should be model bound. This attribute also lets you configure the model binder.  [FromUri] is a derived instance of [ModelBinder] that specifically configures a model binder to only look in the URI.
  4. The body can only be read once.  So if you have 2 complex types in the signature, at least one of them must have a [ModelBinder] attribute on it.

It was  a key design goal for these rules to be static and predictable.

Only one thing can read the body

A key difference between MVC and WebAPI is that MVC buffers the content (eg, request body). This means that MVC’s parameter binding can repeatedly search through the body to look for pieces of the parameters. Whereas in WebAPI, the request body (an HttpContent) may be a read-only, infinite, non-buffered, non-rewindable stream.

That means that parameter binding needs to be very careful about not reading the stream unless it is actually going to bind a parameter.  The action body may want to read the stream directly, and so WebAPI can’t assume that it owns the stream for parameter binding.  Consider this example action:

   
        // Action saves the request’s content into an Azure blob 
        public Task PostUploadfile(string destinationBlobName)
        {
            // string should come from URL, we’ll read content body ourselves.
            Stream azureStream = OpenAzureStorage(destinationBlobName); // stream to write to azure
            return this.Request.Content.CopyToAsync(azureStream); // upload body contents to azure. 
        }

The parameter is a simple type, and so it’s pulled from the query string. Since there are no complex types in the action signature, WebAPI never even touches the request content stream, and so the action body can freely read it.

Some examples

Here are some examples of various requests and how they map to action signatures.

/?id=123&name=bob
void Action(int id, string name) // both parameters are simple types and will come from url

 

/?id=123&name=bob
void Action([FromUri] int id, [FromUri] string name) // paranoid version of above.

void Action([FromBody] string name); // explicitly read the body as a string.

public class Customer {   // a complex object
  public string Name { get; set; }
  public int Age { get; set; }
}

/?id=123
void Action(int id, Customer c) // id from query string, c is a complex object, comes from body via a formatter.

void Action(Customer c1, Customer c2) // error! multiple parameters attempting to read from the body

void Action([FromUri] Customer c1, Customer c2) // ok, c1 is from the URI and c2 is from the body

void Action([ModelBinder(typeof(MyCustomBinder))] SomeType c) // Specifies a precise model binder to use to create the parameter.

[ModelBinder(typeof(MyCustomBinder))] public class SomeType { } // place attribute on type declaration to apply to all parameter instances
void Action(SomeType c) // attribute on c’s declaration means it uses model binding.

Differences with MVC

Here are some differences between MVC and WebAPI’s parameter binding:

  1. MVC only had model binders and no formatters. That’s because MVC would model bind over the request’s body (which it commonly expected to just be FormUrl encoded), whereas WebAPI uses a serializer over the request’s body.
  2. MVC buffered the request body, and so could easily feed it into model binding. WebAPI does not buffer the request body, and so does not model bind against the request body by default.
  3. WebAPI’s binding can be determined entirely statically based off the action signature types. For example, in WebAPI, you know statically whether a parameter will bind against the body or the query string. Whereas in MVC, the model binding system would search both body and query string.

by Mike Stall - MSFT at April 16, 2012 09:33 PM

April 14, 2012

Miguel de Icaza

Contributing to Mono 4.5 Support

For a couple of weeks I have been holding off on posting about how to contribute to Mono, since I did not have a good place to point people to.

Gonzalo has just updated our Status pages to include the differences between .NET 4.0 and .NET 4.5; these provide a useful roadmap for features that should be added to Mono.

This is particularly relevant in the context of ASP.NET 4.5; please join us on mono-devel-list@lists.ximian.com.

by Miguel de Icaza (miguel@gnome.org) at April 14, 2012 02:54 AM

April 11, 2012

Miguel de Icaza

Modest Proposal for C#

This is a trivial change to implement, and would turn what today is an error into useful behavior.

Consider the following C# program:

struct Rect {
	public int X, Y, Width, Height;
}

class Window {
	Rect bounds;

	public Rect Bounds {
		get { return bounds; }
		set {
			// Some code that needs to run when the	property is set
			WindowManager.Invalidate (bounds);
			WindowManager.Invalidate (value);
			bounds = value;
		}
	}
}

Currently, code like this:

Window w = new Window ();
w.Bounds.X = 10;

Produces the error:

Cannot modify the return value of "Window.Bounds.X" because it is not a variable

The reason is that the compiler returns a copy of the "bounds" structure and making changes to the returned value has no effect on the original property.

If we had used a public field for Bounds, instead of a property, the above code would compile, as the compiler knows how to get to the "Bounds.X" field and set its value.

My suggestion is to alter the C# compiler so that what is today considered an error when accessing properties instead does what the developer expects.

The compiler would rewrite the above code into:

Window w = new Window ();
var tmp = w.Bounds;
tmp.X = 10;
w.Bounds = tmp;

Additionally, it should cluster all of the changes done in a single call, so:

Window w = new Window ();
w.Bounds.X = 10;
w.Bounds.Y = 20;

Will be compiled as:

Window w = new Window ();
var tmp = w.Bounds;
tmp.X = 10;
tmp.Y = 20;
w.Bounds = tmp;

To avoid calling the setter for each property set in the underlying structure.

The change is trivial and won't break any existing code.

by Miguel de Icaza (miguel@gnome.org) at April 11, 2012 08:31 PM

April 05, 2012

Jeff Hardy's Blog (NWSGI)

IronPython Samples

One thing that I think has been missing from IronPython for a while now is a set of embedding samples. There are many host environments that IronPython can run in, and while they are all similar they have some differences too.


To correct this, I put together a set of IronPython Samples demonstrating how to embed IronPython in a console, WinForms, and WPF app, as well as writing a complete WPF app in IronPython.

Any feedback (and pull requests!) is welcome. In particular, I'd like to know what other platforms people are interested in: Android, Windows Phone, Silverlight, ASP.NET, etc.

by jdhardy (noreply@blogger.com) at April 05, 2012 03:37 AM

April 04, 2012

Miguel de Icaza

Can JITs be faster?

Herb Sutter discusses in his Reader QA: When Will Better JITs save Managed Code?:

In the meantime, short answer: C++ and managed languages make different fundamental tradeoffs that opt for either performance or productivity when they are in tension.

[...]

This is a 199x/200x meme that’s hard to kill – “just wait for the next generation of (JIT or static) compilers and then managed languages will be as efficient.” Yes, I fully expect C# and Java compilers to keep improving – both JIT and NGEN-like static compilers. But no, they won’t erase the efficiency difference with native code, for two reasons.

First, JIT compilation isn’t the main issue. The root cause is much more fundamental: Managed languages made deliberate design tradeoffs to optimize for programmer productivity even when that was fundamentally in tension with, and at the expense of, performance efficiency. (This is the opposite of C++, which has added a lot of productivity-oriented features like auto and lambdas in the latest standard, but never at the expense of performance efficiency.) In particular, managed languages chose to incur costs even for programs that don’t need or use a given feature; the major examples are assumption/reliance on always-on or default-on garbage collection, a virtual machine runtime, and metadata.

This is a pretty accurate statement on the difference of the mainstream VMs for managed languages (.NET, Java and Javascript).

Designers of managed languages have chosen the path of safety over performance for their designs. For example, accessing elements outside the boundaries of an array is an invalid operation that terminates program execution, as opposed to crashing or creating an exploitable security hole.

But I have an issue with these statements:

Second, even if JIT were the only big issue, a JIT can never be as good as a regular optimizing compiler because a JIT compiler is in the business of being fast, not in the business of generating optimal code. Yes, JITters can target the user’s actual hardware and theoretically take advantage of a specific instruction set and such, but at best that’s a theoretical advantage of NGEN approaches (specifically, installation-time compilation), not JIT, because a JIT has no time to take much advantage of that knowledge, or do much of anything besides translation and code gen.

In general the statement is correct when it comes to early Just-in-Time compilers and perhaps reflects Microsoft's .NET JIT compiler, but this does not apply to state of the art JIT compilers.

Compilers are tools that convert human readable text into machine code. The simplest ones perform straightforward translations from the human readable text into machine code, and typically go through one or more of these phases:

Optimizing compilers introduce a series of steps that alter their inputs to ensure that the semantics described by the user are preserved while generating better code:

An optimization that could be performed on the high-level representation would transform the textual "5 * 4" in the source code into the constant 20. This is an easy optimization that can be done up-front. Simple dead code elimination based on constant folding like "if (1 == 2) { ... }" can also be trivially done at this level.

An optimization on the medium representation would analyze the use of variables and could merge subexpressions that are computed more than once, for example:

	int j = (a*b) + (a*b)

Would be transformed by the compiler into:

	int _tmp = a * b;
	int j = _tmp + _tmp;

A low-level optimization would alter a "MULTIPLY REGISTER-1 BY 2" instruction into "SHIFT REGISTER-1 ONE BIT TO THE LEFT".

JIT compilers for Java and .NET essentially break the compilation process in two by serializing the intermediate data in the compiler pipeline. The first part of the process dumps the result into .dll or .class files:

The second step loads this file and generates the native code. This is similar to purchasing frozen food from the supermarket: you unwrap the pie, shove it in the oven and wait 15 minutes:

Saving the intermediate representation and shipping it off to a new system is not a new idea. The TenDRA C and C++ compilers did this. These compilers saved their intermediate representation into an architecture neutral format called ANDF, similar in spirit to .NET's Common Intermediate Language and Java's bytecode. TenDRA used to have an installer program which was essentially a compiler for the target architecture that turned ANDF into native code.

Essentially, JIT compilers have the same information that a batch compiler has today. For a JIT compiler, the problem comes down to striking a balance between the quality of the generated code and the time it takes to generate the code.

JIT compilers tend to go for fast compile times over quality of the generated code. Mono allows users to configure this threshold by allowing users to pick the optimization level defaults and even lets them pick LLVM to perform the heavy duty optimizations on the code. Slow, but the generated code quality is the same code quality you get from LLVM with C.

Java HotSpot takes a fascinating approach: they do a quick compilation on the first pass, but if the VM detects that a piece of code is being used a lot, the VM recompiles the code with all the optimization turned on and then they hot-swap the code.

.NET has a precompiler called NGen, and Mono allows the --aot flag to be passed to perform the equivalent process that TenDRA's installer did. They precompile the code tuned for the current hardware architecture to avoid having the JIT compiler spend time at runtime translating .NET CIL code to native code.

In Mono's case, you can use the LLVM optimizing compiler as the backend for precompiling code, which produces great code. This is the same compiler that Apple now uses on Lion and as LLVM improves, Mono's generated code improves.

NGen has a few limitations in the quality of the code that it can produce. Unlike Mono, NGen acts merely as a pre-compiler and tests suggest that there are very limited extra optimizations applied. I believe NGen's limitations are caused by .NET's Code Access Security feature which Mono never implemented [1].

[1] Mono only supports the CoreCLR security system, but that is an opt-in feature that is not enabled for desktop/server/mobile use. A special set of assemblies are shipped to support this.

Optimizing JIT compilation for Managed Languages

Java, JavaScript and .NET have chosen a path of productivity and safety over raw performance.

This means that they provide automatic memory management, arrays bounds checking and resource tracking. Those are really the elements that affect the raw performance of these languages.

There are several areas in which managed runtimes can evolve to improve their performance. They won't ever match the performance of hand-written assembly language code, but here are some areas that managed runtimes can work on:

Alias analysis: this is simpler because arrays are accessed with array operations instead of pointer arithmetic.

Intent: with the introduction of LINQ in C#, developers can shift their attention from how a particular task is done to expressing the desired outcome of an operation. For example:

var biggerThan10 = new List<int> ();
for (int i = 0; i < array.Length; i++){
    if (array [i] > 10)
       biggerThan10.Add (array [i]);
}
	

Can be expressed now as:

var biggerThan10 = array.Where (x => x > 10).Select (x => x);
	
// with LINQ syntax:
var biggerThan10 = from x in array where x > 10 select x;

Both managed compilers and JIT compilers can take advantage of the rich information that is preserved to turn the expressed intent into an optimized version of the code.

Extend VMs: Just like Javascript was recently extended to support strongly typed arrays to improve performance, both .NET and Java can be extended to allow fewer features to be supported at the expense of safety.

.NET could allow developers to run without the CAS sandbox and without AppDomains (like Mono does).

Both .NET and Java could offer "unsafe" sections that would allow performance critical code to skip array-bounds checking (at the expense of crashing or creating a security gap; this can be done today in Mono by using -O=unsafe).

.NET and Mono could provide allocation primitives that allocate objects on a particular heap or memory pool:

	var pool = MemoryPool.Allocate (1024*1024);

	// Allocate the TerrainMesh in the specified memory pool
	var p = new pool, TerrainMesh ();

	[...]
	
	// Release all objects from the pool, all references are
	// nulled out
	//
	Assert.NotEquals (p, null);
	pool.Destroy ();
	Assert.Equals (p, null);
	

Limiting Dynamic Features: Current JIT compilers for Java and .NET have to deal with the fact that code can be extended dynamically by either loading code at runtime or generating code dynamically.

HotSpot leverages its ability to recompile code to implement sophisticated techniques, like performing devirtualization safely.

On iOS and other platforms it is not possible to generate code dynamically, so code generators could trivially devirtualize, inline certain operations and drop features from both their runtimes and the generated code.

More Intrinsics: An easy optimization that JIT engines can do is map common constructs into native features. For example, we recently inlined the use of ThreadLocal<T> variables. Many Math.* methods can be inlined, and we applied this technique to Mono.SIMD.

by Miguel de Icaza (miguel@gnome.org) at April 04, 2012 08:53 PM

March 31, 2012

Mike Stall

ASP.Net WebAPI

I recently joined the ASP.Net team and have been working on WebAPI, which is a new .NET MVC-like framework for building HTTP web services. (This is certainly a change of pace from my previous life in the world of compilers and debuggers, but I’m having a blast.)

ScottGu gave a nice overview of WebAPI here and just announced that WebAPI  has gone open source on Codeplex with GIT.  It’s nice to be able to check in a feature and then immediately blog about it.

A discussion forum for WebAPI is here. The codeplex site is here.

by Mike Stall - MSFT at March 31, 2012 03:48 AM

March 29, 2012

Miguel de Icaza

Microsoft's new Open Sourced Stacks

Yesterday Microsoft announced that another component of .NET would be open sourced. The entire ASP.NET MVC stack is now open source, including the Razor Engine, System.Json, Web API and WebPages.

With this release, they will start accepting external contributions to these products and will be running the project like other open source projects are.

Mono and the new Stacks

We imported a copy of the git tree from Codeplex into GitHub's Mono organization in the aspnetwebstack module.

The mono module itself has now taken a dependency on this module, so the next time that you run autogen.sh in Mono, you will get a copy of the aspnetwebstack inside Mono.

As of today, we have removed our System.Json implementation (which was originally built for Moonlight) and replaced it with Microsoft's implementation.

Other libraries like Razor are next, as those are trivially imported into Mono. But ASP.NET MVC 4 itself will have to wait since it depends on extending our own core ASP.NET stack to add asynchronous support.

Our github copy will contain mostly changes to integrate the stack with Mono. If there are any changes worth integrating upstream, we will submit the code directly to Microsoft for inclusion. If you want to experiment with ASP.NET Web Stack, you should do this with your own work and work directly with the upstream maintainers.

Extending Mono's ASP.NET Engine

The new ASP.NET engine has been upgraded to support C# 5.0 asynchronous programming and this change will require a number of changes to the core ASP.NET.

We are currently not aware of anyone working on extending our ASP.NET core engine to add these features, but those of us in the Mono world would love to assist enthusiastic new developers or people that love async programming in bringing these features to Mono.

by Miguel de Icaza (miguel@gnome.org) at March 29, 2012 01:20 AM

March 28, 2012

The Voidspace Techie Blog

unittest.mock and mock 1.0 alpha 1

One of the results of the Python Language Summit at PyCon 2012 is that mock is now in the Python standard library. In Python 3.3 mock is available as unittest.mock. ... [501 words]

March 28, 2012 05:15 PM

March 24, 2012

Mike Stall

OpenSource CSV Reader on Nuget

I did some volunteer work a few years ago that required processing lots of CSV files, so I solved the problem by writing a C# CSV reader, which I wanted to share here. The basic features are:

  1. be easy to use
  2. read and write CSV files (and support tab and “|” delimiters too)
  3. create CSV files around IEnumerable<T>, dictionaries, and other sources.
  4. Provide a “linq to CSV” experience
  5. provide both in-memory mutable tables and streaming over large data sources (thank you polymorphism!)
  6. provide basic analysis operations like histogram, join, find duplicates, etc. The operations I implemented were driven entirely by the goals I had for my volunteer work.
  7. Read from Excel
  8. Work with Azure. (This primarily means no foolish dependencies, and support TextReader/TextWriter instead of always hitting the file system)

I went ahead and put it on github  at https://github.com/MikeStall/DataTable. And it’s available for download via Nuget (see “CsvTools”).  It’s nice to share, and maybe somebody else will find this useful. But selfishly, I’ve used this library for quite a few tasks over the years and putting it on Github and Nuget also makes it easier for me to find for future projects.

There are the obvious disclaimers here that this was just a casual side project I did as a volunteer, and so use as is.

Step 1: Install “CsvTools” via Nuget:

When you right click on the project references node, just select “Add Library Package Reference”. That will bring up the nuget dialog which will search the online repository for packages. Search for “CsvTools” and then you can instantly install it. It’s built against CLR 4.0, but has no additional dependencies.


 

Example 1: Loading from a CSV file

Here’s a CSV at: c:\temp\test.csv

name, species
Kermit, Frog
Ms. Piggy, Pig
Fozzy, Bear

To open and print the contents of the file:

using System;
using DataAccess; // namespace that Csv reader lives in

class Program
{
    static void Main(string[] args)
    {
        DataTable dt = DataTable.New.ReadCsv(@"C:\temp\test.csv");

        // Query via the DataTable.Rows enumeration.
        foreach (Row row in dt.Rows)
        {
            Console.WriteLine(row["name"]);
        }        
    }
}

There are a bunch of extension methods hanging off “DataTable.New” to provide different ways of loading a table. ReadCsv will load everything into memory, which allows mutation operations (see below).  But this also supports streaming operations via the methods with “lazy” in their name, such as ReadLazy().
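
For example, streaming a large file looks just like the in-memory case except that rows are produced on demand (a sketch, assuming ReadLazy takes a file path the same way ReadCsv does):

// Stream rows from a potentially huge CSV without loading it all into memory.
DataTable big = DataTable.New.ReadLazy(@"C:\temp\test.csv");
foreach (Row row in big.Rows)
{
    Console.WriteLine(row["species"]);
}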

Example 2: Creating a CSV from an IEnumerable<T> and saving back to a file

Here’s creating a table from an IEnumerable<T>, and then saving that back to a TextWriter (in this case, Console.Out).

var vals = from i in Enumerable.Range(1, 10) select new { N = i, NSquared = i * i };
DataTable dt = DataTable.New.FromEnumerable(vals);
dt.SaveToStream(Console.Out);  


Which produces this CSV:

N,NSquared
1,1
2,4
3,9
4,16
5,25
6,36
7,49
8,64
9,81
10,100

 

Example 3: Mutations

DataTable is actually an abstract base class. There are two primary derived classes:

  1. MutableDataTable, which loads everything into memory, stores it in column major order, and provides mutation operations.
  2. A streaming data table, which provides streaming access over rows. This is obviously row major order, and doesn’t support mutation. The streaming classes are non-public derived classes of DataTable.

Most of the builder functions that load in memory actually return the derived MutableDataTable object anyways. A MutableDataTable is conceptually a giant 2d string array stored in column major order. So adding new columns or rearranging columns is cheap. Adding rows is expensive. Here’s an example of some mutations:

static void Main(string[] args)
{
    MutableDataTable dt = DataTable.New.ReadCsv(@"C:\temp\test.csv");

    // Mutations
    dt.ApplyToColumn("name", originalValue => originalValue.ToUpper());
    dt.RenameColumn(oldName:"species", newName: "kind");
    
    
    int id = 0;
    dt.CreateColumn("id#", row => { id++; return id.ToString(); });

    dt.GetRow(1)["kind"] = "Pig!!"; // update in place by row
    dt.Columns[0].Values[2] = "Fozzy!!"; // update by column

    // Print out new table
    dt.SaveToStream(Console.Out);        
}

Produces and prints this table:

name,kind,id#
KERMIT,Frog,1
MS. PIGGY,Pig!!,2
Fozzy!!,Bear,3

 

There’s a builder function, DataTable.New.GetMutableCopy, which produces a mutable copy from an arbitrary DataTable.
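
A hypothetical usage sketch (the exact parameter shape of GetMutableCopy is assumed here):

// Take a streaming (read-only) table and produce an in-memory copy we can mutate.
DataTable source = DataTable.New.ReadLazy(@"C:\temp\test.csv");
MutableDataTable copy = DataTable.New.GetMutableCopy(source);
copy.RenameColumn(oldName: "name", newName: "muppet");
copy.SaveToStream(Console.Out);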

Example 4: Analysis

I needed some basic analysis functions, like join, histogram, select duplicates, sample, and where. These sit as static methods in the Analyze class.

Here’s an example of creating a table with random numbers, and then printing the histogram:

static void Main(string[] args)
{   
    // Get a table of 1000 random numbers
    Random r = new Random();
    DataTable dt = DataTable.New.FromEnumerable(
        from x in Enumerable.Range(1, 1000) 
        select r.Next(1, 10));

    Tuple<string,int>[] hist = Analyze.AsHistogram(dt, columnIdx: 0);
    
    // Convert the tuple[] to a table for easy printing
    DataTable histTable = DataTable.New.FromTuple(hist, 
        columnName1: "value",
        columnName2: "frequency");
    histTable.SaveToStream(Console.Out);
}

Produces this result:

value,frequency
9,151
8,124
2,118
7,110
3,107
5,104
1,101
6,99
4,86

by Mike Stall - MSFT at March 24, 2012 04:31 PM

March 22, 2012

Miguel de Icaza

Mono 2.11.0 is out

After more than a year of development, we are happy to announce Mono 2.11, the first in a series of beta releases that will lead to the next 2.12 stable release.

Continuous Integration

To assist those helping us with testing the release, we have set up a new continuous build system that builds packages for Mac, OpenSUSE and Windows at http://wrench.mono-project.com/Wrench.

Packages

To test drive Mono 2.11 head to our downloads page and select the "Alpha" section of the page to get the packages for Mac, Windows or Linux.

The Linux version is split up in multiple packages.

The Windows version ships with Gtk+ and Gtk#

The Mac version ships with Gtk+, Gtk#, F#, IronPython and IronRuby and comes in two versions: Mono Runtime Environment (MRE) and the more complete Mono Development Kit (MDK).

At this stage, we recommend that users get the complete kit.

Runtime Improvements in Mono 2.11

There are hundreds of new features available in this release as we have accumulated them over a very long time. Every fix that has gone into the Mono 2.10.xx series has been integrated into this release.

In addition, here are some of the highlights of this release.

Garbage Collector: Our SGen garbage collector is now considered production quality and is in use by Xamarin's own commercial products.

The collector on multi-CPU systems will also distribute various tasks across the CPUs; it is no longer limited to the marking phase.

The guide Working with SGen will help developers tune the collector for their needs and discusses tricks that developers can take advantage of.

ThreadLocal<T> is now inlined by the runtime engine, speeding up many threaded applications.

Full Unicode Surrogate Support: this was a long-standing missing feature and has now been implemented.

C# 5.0 -- Async Support

Mono 2.11 implements the C# 5.0 language with complete support for async programming.

Mono's class libraries have been updated to better support async programming. See the section "4.5 API" for more details.

C# Backend Rewrite

The compiler code generation backend was rewritten entirely to support both IKVM.Reflection and System.Reflection which allowed us to unify all the old compilers (mcs, gmcs, dmcs and smcs) into a single compiler: mcs. For more information see Backend Rewrite.

The new IKVM.Reflection backend allows the compiler to consume any mscorlib.dll library, instead of being limited to the ones that were custom built/crafted for Mono.

In addition, the compiler is no longer a big set of static classes, instead the entire compiler is instance based, allowing multiple instances of the compiler to co-exist at the same time.

Compiler as a Service

Mono's Compiler as a Service has been extended significantly and reuses the compiler's fully instance based approach (see Instance API for more details).

Mono's compiler as a service is still a low-level API to the C# compiler. The NRefactory2 framework --shared by SharpDevelop and MonoDevelop-- provides a higher level abstraction that can be used by IDEs and other high-level tools.

C# Shell

Our C# interactive shell and our C# API to compile C# code can now, in addition to compiling expressions and statements, compile class definitions.

4.5 API

4.5 Profile: Mono now defaults to the 4.5 profile, which is a strict superset of the 4.0 profile and reuses the same version number for the assemblies.

Although .NET 4.5 has not yet been officially released, the compiler now defaults to the 4.5 API; if you want to use a different profile API you must pass the -sdk:XXX switch to the command line compiler.

Because 4.5 API is a strict superset of 4.0 API they both share the same assembly version number, so we actually install the 4.5 library into the GAC.

Some of the changes in the 4.5 API family include:

  • New Async methods
  • WinRT compatibility API
  • Newly introduced assemblies: System.Net.Http, System.Threading.Tasks.Dataflow

The new System.Net.Http stack is ideal for developers using the C# 5.0 async framework.
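
For example, here is a minimal sketch (with a hypothetical URL) of the new HttpClient stack combined with C# 5.0 async/await:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Example
{
    static async Task FetchAsync()
    {
        using (var client = new HttpClient())
        {
            // Await the download without blocking a thread.
            string body = await client.GetStringAsync("http://www.example.com/");
            Console.WriteLine("Received {0} characters", body.Length);
        }
    }
}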

Debugging

The GDB support has been extended and can pretty print more internal variables of Mono as well as understanding SGen internals.

The soft debugger has seen a large set of improvements:

  • Single stepping is now implemented using breakpoints in most cases, speeding it up considerably.
  • Calls to System.Diagnostics.Debugger:Log()/Break () are now routed to the debugger using new UserLog/UserBreak event types.
  • S390x is now supported (Neale Ferguson).
  • MIPS is now supported.
  • Added new methods to Mono.Debugger.Soft and the runtime to decrease the amount of packets transmitted between the debugger and the debuggee. This significantly improves performance over high latency connections like USB.

Mac Support

Mac support has been vastly extended: from faster GC using native Mach primitives, to improving many features that previously only worked on Linux, to extending the asynchronous socket support in Mono to use MacOS X specific primitives.

New Ports

We have completed the Mono MIPS port.

Performance

As a general theme, Mono 2.11 has hundreds of performance improvements in many small places which add up.

by Miguel de Icaza (miguel@gnome.org) at March 22, 2012 07:33 PM

March 17, 2012

Miguel de Icaza

Mono and Google Summer of Code

Students, get your pencils ready for an intense summer of hacking with the Google Summer of Code and Mono!

Check out the Mono organization Summer of Code Project site.

by Miguel de Icaza (miguel@gnome.org) at March 17, 2012 12:41 AM

March 16, 2012

Miguel de Icaza

Cross Platform Game Development in C#

If you missed the live session on Cross Platform Game Development in C# from AltDevConf you can now watch the presentation.

You can also check the videos for all the AltDevConf presentations.

by Miguel de Icaza (miguel@gnome.org) at March 16, 2012 09:33 PM

March 13, 2012

Jeff Hardy's Blog (NWSGI)

Mea Culpa

I was so excited about getting IronPython 2.7.2 out the door, I briefly dropped my common sense and made a change to IronPython that never should have been made without triggering another RC release. So what the hell happened?

The change in question is f8cce37. The correction is 4a76497.

What Broke?

The property in question – MaybeNotImplemented – checks to see if a method’s return type has the MaybeNotImplemented attribute, which tells IronPython that the operator may return NotImplemented; this indicates that the attempted operation doesn’t work and that other options should be tried. Without specifying [return:MaybeNotImplemented] on a native method, IronPython won’t generate code to perform the other operations.

Windows Phone Fixes

The issue that triggered the initial change was #32374. This is interesting in itself, as it turns out that MethodInfo.ReturnParameter is not supported on Windows Phone 7 – it exists, and it compiles, but it throws NotSupportedException. Joy.

Since mobile support was new, I figured that making a change specific to Windows Phone should be OK. And it probably would have been, had I done what I originally intended and put the new WP7 code in an #if block and left the original code intact. But instead I decided that if the new code worked for both, why not use it?

Static Typing is Not Enough

Notice how small the fix is? MethodInfo.ReturnTypeCustomAttributes returns an ICustomAttributeProvider, which has the IsDefined method. As it turns out, MethodInfo also implements ICustomAttributeProvider. This means that the original fix compiled, ran, and worked for most cases, but failed on others. And they failed in the worst possible way – silently (except for the part where the program breaks).
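
To illustrate the trap (a standalone sketch with a stand-in attribute, not IronPython's actual code): both calls below compile because both objects implement ICustomAttributeProvider, but they inspect different attribute targets.

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.ReturnValue)]
class MaybeNotImplementedAttribute : Attribute { }

class Ops
{
    [return: MaybeNotImplemented]
    public static object Add(object x, object y) { return x; }
}

class Demo
{
    static void Main()
    {
        MethodInfo m = typeof(Ops).GetMethod("Add");

        // Looks at attributes on the method itself: prints False.
        Console.WriteLine(m.IsDefined(typeof(MaybeNotImplementedAttribute), false));

        // Looks at attributes on the return parameter: prints True.
        Console.WriteLine(m.ReturnTypeCustomAttributes.IsDefined(
            typeof(MaybeNotImplementedAttribute), false));
    }
}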

But but but … TESTS!

Yes, the tests should have caught it. Unfortunately IronPython has been running for a while without paying much attention to the state of the tests, and there’s really no one to blame for this except me. Most of the CPython standard library tests fail at some point or another, which drowns out the useful failures in a sea of noise. This, of course, has to change, so for 2.7.3 I’m focusing on the much smaller set of tests specifically for IronPython (not those inherited from CPython).

After this, there’s no way 2.7.3 is going out without at least that baseline set of tests green, and once I get them in order all new contributions will have to pass the relevant tests. This should be the only time that I have to do an emergency release.

In the meantime, IronPython 2.7.2.1 is available, which fixes this issue.

by jdhardy (noreply@blogger.com) at March 13, 2012 07:28 AM

March 08, 2012

Hex Dump

I am speaking at IIUG 2012 about using Python with Informix

The 2012 IIUG (International Informix User Group) conference will be in San Diego, California from April 22 - 25 2012. All three of my talk proposals have been accepted, and one of these is about using Python with Informix. As well as preparing my presentation, I have been working on a number of Python Open Source projects either adding or improving their support for Informix access. So hopefully

by Mark Rees (noreply@blogger.com) at March 08, 2012 05:21 AM

March 06, 2012

Miguel de Icaza

Working With SGen

As SGen becomes the preferred garbage collector for Mono, I put together the Working With SGen document. This document is intended to explain the options that you, as a developer, can tune in SGen, as well as some practices that you can adopt in your application to improve its performance.

This document is a complement to the low-level implementation details that we had previously posted.

by Miguel de Icaza (miguel@gnome.org) at March 06, 2012 12:18 AM

March 05, 2012

Miguel de Icaza

Gtk+ and MacOS X

We have released a new build of Mono 2.10.9 (Beta) with the latest version of Gtk+2 containing dozens of bug fixes done by the Lanedo team to improve the quality of Mono/Gtk+ apps on OSX.

This is still a beta release; please take it out for a spin. We are almost ready to graduate this as our stable Mono package.

by Miguel de Icaza (miguel@gnome.org) at March 05, 2012 09:20 PM

Miguel de Icaza

Phalanger's PHP on Mono/.NET Updates

The Phalanger developers have published an updated set of benchmarks of their PHP compiler running on top of .NET vs PHP and Cached PHP, and the results are impressive:

There are two cases on the language shootout where they are slower than PHP (out of eighteen cases) and they are also slower on eight of thirty-one microbenchmarks.

But in general with real applications like WordPress and MediaWiki, the performance gains are impressive.

by Miguel de Icaza (miguel@gnome.org) at March 05, 2012 09:18 PM

The Voidspace Techie Blog

Ergonomics: Kinesis Freestyle Keyboard and Evoluent Vertical Mouse

I've been using computers for a long time, and for most of that time I've been using them for the whole of the working day and often the rest of the day too. A few years ago I started getting pains in my wrists (classic programmer's RSI) and began using wrist rests and an ergonomic keyboard. ... [1783 words]

March 05, 2012 09:16 AM

February 29, 2012

The Voidspace Techie Blog

Tests that fail one day every four years

Some code looks harmless but has hidden bugs lurking in its nether regions. Code that handles dates is notorious for this, and this being February 29th (the coders' halloween) it's time for the bugs to come crawling out of the woodwork. ... [230 words]

February 29, 2012 02:24 PM

February 16, 2012

The Voidspace Techie Blog

mock 0.8 released

After more than six months development work mock 0.8 has been released. 0.8 is a big release with many new features, general improvements and bugfixes. ... [968 words]

February 16, 2012 11:56 AM

February 12, 2012

Miguel de Icaza

February 10, 2012

Miguel de Icaza

C# for Gaming: AltDevConf This Weekend

It is a great honor to participate this weekend in the online AltDevConf conference. This is an online two-day event:

Our goal is twofold: To provide free access to a comprehensive selection of game development topics taught by leading industry experts, and to create a space where bright and innovative voices can also be heard. We are able to do this, because as an online conference we are not subject to the same logistic and economic constraints imposed by the traditional conference model.

I will be participating in the talk on Cross Platform Game Development using C# with Matthieu Laban and Philippe Rollin.

You can register here for our session on Saturday at 3pm Eastern Standard Time, noon Pacific Time, and 9pm Paris Time.

If you are located in the Paris time zone, that means you get to enjoy the talk sipping a hot chocolate with some tasty baguettes.

by Miguel de Icaza (miguel@gnome.org) at February 10, 2012 12:59 AM

January 22, 2012

The Voidspace Techie Blog

Callable object with state using generators

It's often convenient to create callable objects that maintain some kind of state. In Python we can do this with objects that implement the __call__ method and store the state as instance attributes. ... [597 words]

January 22, 2012 02:05 PM

January 13, 2012

The Voidspace Techie Blog

Simple mocking of open as a context manager

Using open as a context manager is a great way to ensure your file handles are closed properly and is becoming common: with open('/some/path', 'w') as f: f.write('something') The issue is that even if you mock out the call to open it is the returned object that is used as a context manager (and has __enter__ and __exit__ called). Using MagicMock from the mock library, we can mock out context managers very simply. ... [320 words]

January 13, 2012 11:18 AM

January 12, 2012

The Voidspace Techie Blog

Mocks with some attributes not present

Mock objects, from the mock library, create attributes on demand. This allows them to pretend to be objects of any type. ... [199 words]

January 12, 2012 11:33 AM

January 11, 2012

The Voidspace Techie Blog

mock 0.8rc2: new release and development docs

I've pushed out a new release of mock. This fixes an inconsistency in the create_autospec api I discovered whilst working on the docs (yes I've really been working on the docs), and a fix for a bug with using ANY. ... [190 words]

January 11, 2012 01:13 AM

January 03, 2012

The Voidspace Techie Blog

Python on Google Plus

As you may (or perhaps not) have noticed, I've been blogging a lot less in the last year. A new job with Canonical (although I've been there over a year now) and an eight month old daughter all make blogging harder. ... [83 words]

January 03, 2012 10:41 AM

December 31, 2011

The Voidspace Techie Blog

Sphinx doctests and the execution namespace

I've finally started work on the documentation for mock 0.8 release, and much of it involves converting the write-ups I did in the blog entries. The mock documentation is built with the excellent Sphinx (of course!) ... [402 words]

December 31, 2011 11:28 PM

December 29, 2011

The Voidspace Techie Blog

mock 0.8 release candidate 1 and handling mutable arguments

I've released mock 0.8 release candidate 1. You can download it it or install it with: pip install -U mock==dev mock is a library for testing in Python. ... [613 words]

December 29, 2011 12:04 PM

December 22, 2011

Miguel de Icaza

Mono in 2011

This was a very interesting year for Mono, and I wanted to capture some of the major milestones and news from the project as well as sharing a bit of what is coming up for Mono in 2012.

I used to be able to list all of the major applications and great projects built with Mono. The user base has grown so large that I am no longer able to do this. 2011 was a year that showed an explosion of applications built with Mono.

In this post I list a few of the high-profile projects, but it is by no means an exhaustive list. There are too many great products and amazing technologies being built with Mono; a comprehensive list would take too long to assemble.

Xamarin

The largest event for Mono this year was that the team working on Mono technologies at Novell was laid off after Novell was acquired.

We got back on our feet, and two weeks after the layoffs had taken place, the original Mono team incorporated as Xamarin.

Xamarin's goal is to deliver great productivity and great tools for mobile developers. Our main products are Mono on iOS and Mono on Android.

These products are built on top of the open source Mono project and the MonoDevelop project. We continue to contribute extensively to these two open source projects.

Launching Xamarin was a huge effort for all of us.

Xamarin would not have been possible without our great customers and friends in the industry. Many people cared deeply about the technology and helped us get up and running.

In July, we announced an agreement with Attachmate that ensured a bright future for our young company.

A couple of days later, we were ready to sell the mobile products that had been previously developed at Novell, and we started to provide all existing Novell customers with ongoing support for their Mono-based products.

Half a year later, we grew the company and continued to do what we like the most: writing amazing software.

Meanwhile, our users have created amazing mobile applications. You can see some of those in our App Catalog.

C# Everywhere

On the Mobile Space: This year Sony jumped to C# in a big way with the introduction of PS Suite (see the section below) and Nokia adopted Windows Phone 7 as their new operating system.

And we got you covered on Android and iOS for all of your C# needs.

On the Browser: we worked with Google to bring Mono to Native Client. In fact, every demo shown at the Google Native Client event on December 8th was powered by Mono.

On the Desktop: this year we added MacOS X as a first-class citizen in the world of supported Mono platforms. We did this by introducing MonoMac 1.0 and supporting Apple's MacStore with it.

Games: these continue to take advantage of C#'s blend of performance and high-level features. Read more in my GDC 2011 post.

It is a wild new world for C# and .NET developers that were used to building their UI using only ASP.NET or Winforms. It has been fascinating to see developers evolve their thinking from a Microsoft-only view of the world to one where they design libraries and applications that split the presentation layer from the business logic.

Developers that make this transition will be able to get great native experiences on each device and form factor.

Sony PSSuite - Powered by Mono

At GDC, Sony announced that PS Suite was built on top of Mono. PS Suite is a new development stack for cross-platform games and cross-platform applications to run on Android devices and Sony Vita.

The PS Suite presentation is available in this video.

In particular, watch the game in Video 2 to get a feeling for the speed of a 3D game purely written in managed code (no native code):

Some of the juicy details from the GDC announcement:

  • PS Suite will have an open appstore model, different from the traditional game publishing business.
  • Open SDK, available for everyone at launch time.
  • PS Suite supports both game development with Sony's 3D libraries as well as regular app development.
  • Cross-platform, cross-device, using the ECMA Common Intermediate Language.
  • Code in C#, run using Mono.
  • GUI Designer called "UI Composer" for non-game applications.
  • The IDE is based on MonoDevelop.
  • Windows-simulator is included to try things out quickly.

MonoDevelop on PSSuite:

PS Suite comes with a GUI Toolkit and this is what the UI composer looks like:

Google Native Client

Google engineers ported Mono to run in the sandboxed environment of Native Client. Last year they added support for Mono's code generator to output code for Native Client using Mono's static compiler.

This year Google extended Native Client to support Just in Time Compilation, in particular, Mono's brand of JIT compilation. This was used by all three demos shown at the Google Native Client event a couple of days ago:

Unity Powered Builder

This is another game built with Unity's Native Client code generator:

To get the latest version of Mono with support for Native Client, download and build Mono from Google's branch on github.

Mono 2.10

This was the year of Mono 2.10. We went from a beta release for Mono 2.10 in January to making it our new stable release for Mono.

While the world is on Mono 2.10, we have started our work to get Mono 2.12 out in beta form in January.

Mono on Android

This year we launched Mono for Android, a product that consists of a port of Mono to the Android OS, C# bindings to the native Java APIs, and IDE support for both MonoDevelop and Visual Studio.

The first release came out in April and it was rough around the edges, but thanks to the amazing community of users that worked with us during the year, we solved the performance problems, fixed the slow debugging, vastly improved the edit/debug/deploy cycle and managed to catch up to Google's latest APIs with the introduction of Mono for Android 4.0.

Mono on iOS

Just like Android, we have been on a roll with MonoTouch.

In short, this year:

  • We kept up with Apple's newly introduced APIs (UIKit, iCloud, Airplay, Bluetooth, Newstand, CoreImage).
  • Integrated XCode 4's UI designer with MonoDevelop and added support for storyboards.
  • Added the option of using LLVM for our builds, bringing thumb support and ARMv7 support along the way.

We started beta-testing a whole new set of features to be released early next year: a new unit testing framework, a heap profiler, integrating MonoTouch.Dialog in the product and improving the debug/deploy process.

Mono for iOS has been on the market now for two years, and many products are coming to the market based on it.

Phalanger

Phalanger is a PHP compiler that runs on the .NET and Mono VMs and is powered by the Dynamic Language Runtime.

It is so complete that it can run both MediaWiki and WordPress out of the box, and it does so faster than they run under PHP.

This year the Phalanger guys released Phalanger 3.0 which now runs on Mono (previously they required the C++/CLI compiler to run).

Phalanger's performance is impressive: it is just as fast as the newly announced Facebook HipHop VM for PHP. The major difference is that Phalanger is a complete PHP implementation, while the HipHop VM is still not a complete implementation.

The other benefit of Phalanger is that it is able to participate and interoperate with code written in other .NET languages, as well as benefiting from the existing .NET interop story (C, C++).

CXXI

Our technology to bridge C# and C++ matured to the point that it can be used by regular users.

Compiler as a Service

This year our C# compiler was expanded in three directions:

  • We completed async/await support
  • We completed the two code output engines (System.Reflection.Emit and IKVM.Reflection).
  • We improved the compiler-as-a-service features of the compiler.

Our async/await support is scheduled to go out with the first preview of Mono 2.11 in early January. We cannot wait to get this functionality to our users and start building a new generation of async-friendly desktop, mobile and server apps.
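
To give a flavor of what this enables, here is a minimal sketch of the C# 5 async pattern; HttpClient is just an illustrative .NET 4.5 API used here for the example, not something specific to Mono's implementation:

using System.Net.Http;
using System.Threading.Tasks;

class Example {
	// The method is suspended at "await" without blocking the calling thread,
	// and resumes when the download completes.
	public static async Task<int> GetPageLengthAsync (string url)
	{
		using (var client = new HttpClient ()) {
			string body = await client.GetStringAsync (url);
			return body.Length;
		}
	}
}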

One major difference between our compiler-as-a-service and Microsoft's version of the C# compiler as a service is that we support two code generation engines, one generates complete assemblies (like Microsoft does) and the other one is able to be integrated with running code (this is possible because we use System.Reflection.Emit and we can reference static or dynamic code from the running process).

We have also been improving the error recovery components of the compiler as this is going to power our new intellisense/code completion engine in MonoDevelop. Mono's C# compiler is the engine that is powering the upcoming NRefactory2 library.

You can read more about our compiler as a service updates.

Unity3D

Unity is one of Mono's major users. At this point Unity no longer requires an introduction: they went from being an independent game engine a few years ago to being one of the major game engine platforms in the industry this year.

The Unity engine runs on every platform under the sun. From the Consoles (PS3, Wii and XBox360) to iPhones and Androids and runs on your desktop either with the Unity3D plugin or using Google's Native Client technology. The list of games being built with Unity keeps growing every day and they are consistently among the top sellers on every app store.

Mono is the engine that powers the scripts and custom code in games and applications built with Unity3D and it also powers the actual tool that users use to build games, the Unity3D editor:

The editor itself is implemented in terms of Unity primitives, and users can extend the Unity3D editor with C#, UnityScript or Boo scripts dynamically.

One of my favorite games built with Unity3D is Rochard, which was demoed earlier this year on a PS3 at GDC and is now also available on Steam:

Microsoft

Just before the end of the year, Microsoft shipped Kinectimals for iOS systems.

Kinectimals is built using Unity and this marks the first time that Microsoft ships a software product built with Mono.

Then again, this year has been an interesting one for Microsoft: they have embraced open source technologies for Azure, released SDKs for iOS and Android at the same time as they ship SDKs for their own platforms, and shipped various applications on Apple's AppStore for iOS.

MonoDevelop

We started the year with MonoDevelop 2.4 and we finished after two major releases with MonoDevelop 2.8.5.

In the course of the year, we added:

  • Native Git support
  • Added .NET 4.0 project support, upgraded where possible to XBuild/MSBuild
  • MonoMac Projects
  • XCode 4 support for MonoMac, MonoTouch and Storyboards
  • Support for Android development
  • Support for iOS5 style properties
  • Major upgrade to the debugger engine
  • Adopted native dialogs on OSX and Windows

Our Git support was based on a machine assisted translation of the Java jGit library using Sharpen. Sharpen has proved to be an incredibly useful tool to bring Java code to the .NET world.

SGen

Our precise collector has gotten a full year of testing now. With Mono 2.10 we made it very easy for developers to try it out. All users had to do was run their programs with the --sgen flag, or set MONO_ENV_OPTIONS to gc=sgen.

Some of the new features in our new Garbage Collector include:

  • Windows, MacOS X and S390x ports of SGen (in addition to the existing x86, x86-64 and ARM ports).
  • Lock-free allocation to improve scalability (we only take locks when we run out of memory).
  • Work stealing parallel collector and a parallel nursery collector, to take advantage of extra CPUs on the system to help with the GC.
  • Performance and scalability work: as our users tried things out in the field, we identified hot-spots in SGen which we have been addressing.

As we are spending so much time on ARM-land these days, SGen has also gained various ARM-specific optimizations.

SGen was designed primarily to be used by Mono, and we are extending it beyond being a pure garbage collector for Mono to support scenarios where our garbage collector has to be integrated with other object systems and garbage collectors. This is the case for Mono for Android, where we now have a cooperative garbage collector that works hand-in-hand with Dalvik's GC. We also introduced support for toggle references to better support Objective-C environments like MonoTouch and MonoMac.

XNA and Mono: MonoGame

Ever since Microsoft published the XNA APIs for .NET, developers have been interested in bringing XNA to Mono-based platforms.

There was a MonoXNA project, which was later reused by projects like SilverXNA (an XNA implementation for Silverlight) and later XNAtouch, an implementation of XNA for the iPhone powered by MonoTouch. Both were very narrow projects focused on single platforms.

This year, the community got together and turned the single-platform XNATouch into a full cross-platform framework; the result is the MonoGame project:

Platform Support Matrix

Currently MonoGame's strength is on building 2D games. They already have an extensive list of games that have been published on the iOS AppStore and the Mac AppStore and they were recently featured in Channel 9's Coding For Fun: MonoGame Write Once Play Everywhere.

An early version of MonoGame/XnaTouch powers SuperGiantGame's Bastion game on Google's Native Client, which allows users of Windows, Mac and Linux desktop systems to run the same executable on all three. If you are running Chrome, you can install it in seconds.

Incidentally, Bastion just won three awards at the Spike VGA awards including Best Downloadable Game, Best Indie Game, and Best Original Score.

The MonoGame team had been relatively quiet for most of 2011 as they were building their platform, but they got into a good release cadence with the MonoGame 2.0 release in October, when they launched as a cross-platform engine, followed by a tasty 2.1 release only two weeks ago.

With the addition of OpenGL ES 2.0 support and 3D capabilities to MonoGame, 2012 looks like it will be a great year for the project.

Gtk+

Since MonoDevelop is built on top of the Gtk+ toolkit, which was primarily a Unix toolkit, there have been a few rough areas for our users on both Mac and Windows.

This year we started working with the amazing team at Lanedo to improve Gtk+ 2.x to work better on Mac and Windows.

The results are looking great and we want to encourage developers to try out our new Beta version of Mono, which features the updated Gtk+ stack.

This new Gtk+ stack solves many of the problems that our users have reported over the past few months.

Hosting Bills

I never tracked Mono downloads, as I always felt that tracking download numbers for open source code that gets repackaged and redistributed elsewhere was pointless.

This summer we moved the code hosting from Novell to Xamarin and we were surprised by our hosting bills.

The dominant force is binaries for Windows and MacOS, which are communities that tend not to download source and package the software themselves. This is the breakdown of completed downloads (not partial or interrupted downloads) for our first month of hosting Mono:

  • 39,646 - Mono for Windows (Runtime + SDK)
  • 27,491 - Mono for Mac (Runtime)
  • 9,803 - Mono for Windows (Runtime)
  • 9,910 - Mono for Mac (Runtime + SDK)

  • Total: 86,850 downloads for Windows and Mac

These numbers are only for the Mono runtime, not MonoDevelop, the MonoDevelop add-ins or any other third party software.

It is also worth pointing out that none of our Windows products (MonoDevelop for Windows, or Mono for Android on Windows) use the Mono runtime. So these downloads are from people doing some sort of embedding of Mono in their applications on Windows.

At this point, we got curious. We ran a survey for two days and collected 3,949 answers. This is a summary of the answers:

What type of application will you run with Mono?

This one was fascinating, many new users to the .NET world:

The best results came from the free-form answers in the survey. I am still trying to figure out how to summarize these answers; they are all very interesting, but they are also all over the map.

Some Key Quotes

When I asked last week for stories of how you used Mono in 2011, some of you posted on the thread, and some of you emailed me.

Here are a couple of quotes from Mono users:

I can't do without Mono and I don't just mean the iOS or Android dev with C# but MonoMac and Mono for *nix too. Thanks for everything; from the extraordinary tools to making hell turn into heaven, and thank you for making what used to be a predicament to effortless development pleasure.

I don't think we could have achieved our solutions without Mono in enterprise mobile development. It addresses so many key points, it is almost a trade secret. We extensively use AIR and JavaScript mobile frameworks too but ultimately we desperately need 1-to-1 mapping of the Cocoa Touch APIs or tap into low level features which determines our choice of development platform and frameworks.

That's where Mono comes in.

Gratefulness and paying polite respects aside, the key tenets of Mono we use are:

  • shared C# code base for all our enterprise solutions - achieving the write once, compile everywhere promise with modern language and VM features everyone demands and expects in this century
  • logical, consistent and self-explanatory wrapper APIs for native services - allows us to write meta APIs of our own across platforms
  • low latency, low overhead framework
  • professional grade IDE and tools
  • native integration with iOS tools and development workflow
  • existence of satisfactory documentation and support
  • legal clarity - favorable licensing options
  • dedicated product vision via Xamarin - commercial backing
  • community support

Koen Pijnenburg shared this story with me:

We've been in touch a few times before and would like to contribute my story. It's not really an interesting setup, but a real nice development for Mono(Touch). I've been developing app for iPhone since day 1, I was accepted in the early beta for the App Store. On launch day july 2008, 2 of the 500 apps in the App Store were mine, my share has decreased a lot in the past years ;)

I really, really, really like football(soccer), maybe you do also, I don't know. In september 2008 I created the first real soccer/football stats app for the iPhone called My Football. This was a huge succes, basically no competition at that time. In 2009 I released My Football Pro, an app with 800 leagues worldwide, including live data for more then 100 leagues. Since then I created lots of apps, all created with the iPhone SDK and with Objective-C.

Since the launch of MonoTouch, it merged the best of two worlds in my opinion. I've been a Mono/.NET developer for years before the iPhone apps, for me it was love at first line of code.

The last year I've increased my work with MonoTouch / Droid /MonoGame(Poppin' Frenzy etc ;)), and focused less on working with native SDK's only. Since our My Football apps are at the end of their lifecycle in this form, we are working on a new line of My Football apps. Our base framework supporting our data, is built with Mono, and the apps UI will be built with MonoTouch / MonoDroid / WP7 etc.

Included is the screenshot of our first app built with the framework, My Football Pro for iPad. It has a huge amount of data, stats / tables / matches / live data for more then 800 leagues worldwide. We think it's a great looking app!

Working with MonoTouch is fantastic and just wanted you to know this!

Mono on Mainframes

This year turned out to show nice growth in the deployment of Mono on IBM zSeries computers.

Some are using ASP.NET, some are using Mono in headless mode. This was something that we were advocating a few years ago, and this year the deployments went live both in Brazil and Europe.

Neale Ferguson from Sinenomine has kept the zSeries port active and in shape.

Mono and ASP.NET

This year we delivered enough of ASP.NET 4.0 to run Microsoft's ASP.NET MVC 3.

Microsoft ASP.NET MVC 3 is a strange beast. It is licensed under a great open source license (MS-PL) but the distribution includes a number of binary blobs (the Razor engine).

I am inclined to think that the binaries are not under the MS-PL, but strictly speaking, since the binaries are part of the MS-PL distribution labeled as such, the entire download is MS-PL.

That being said, we played it safe in Mono-land and we did not bundle ASP.NET MVC3 with Mono. Instead, we provide instructions on how users can deploy ASP.NET MVC 3 applications using Razor as well as pure Razor apps (those with .cshtml extensions) with Mono.

2012, the year of Mono 2.12

2012 will be a year dominated by our upcoming Mono release: Mono 2.12. It packs a year's worth of improvements to the runtime, to our build process and to the API profiles.

Mono 2.12 defaults to the .NET 4.x APIs and includes support for .NET 4.5.

This is going to be the last time that we branch Mono for these extended periods of time. We are changing our development process and release policies to reduce the amount of code that sits in a warehouse waiting to be rolled out to developers.

ECMA

We wrapped up our work on updating the ECMA CLI standard this year. The resulting standard is now at ISO and going through the standard motions to become an official ISO standard.

The committee is getting ready for a juicy year ahead of us, where we shift gears from polish and details to taking on significant extensions to the spec.

by Miguel de Icaza (miguel@gnome.org) at December 22, 2011 12:47 AM

December 19, 2011

Miguel de Icaza

CXXI: Bridging the C++ and C# worlds.

The Mono runtime engine has many language interoperability features but has never had a strong story to interop with C++.

Thanks to the work of Alex Corrado, Andreia Gaita and Zoltan Varga, this is about to change.

The short story is that the new CXXI technology allows C#/.NET developers to:

  • Easily consume existing C++ classes from C# or any other .NET language
  • Instantiate C++ objects from C#
  • Invoke C++ methods in C++ classes from C# code
  • Invoke C++ inline methods from C# code (provided your library is compiled with -fkeep-inline-functions or that you provide a surrogate library)
  • Subclass C++ classes from C#
  • Override C++ methods with C# methods
  • Expose instances of C++ classes or mixed C++/C# classes to both C# code and C++ as if they were native code.

CXXI is the result of two summers of work from Google's Summer of Code towards improving the interoperability of Mono with the C++ language.

The Alternatives

This section is merely a refresher on the underlying technologies for interoperability supported by Mono and on how developers coped with C++ and C# interoperability in the past. You can skip it if you want to get straight to how to get started with CXXI.

As a reminder, Mono provides a number of interoperability bridges, mostly inherited from the ECMA standard. These bridges include:

  • The bi-directional "Platform Invoke" technology (P/Invoke) which allows managed code (C#) to call methods in native libraries as well as support for native libraries to call back into managed code.
  • COM Interop which allows Mono code to transparently call C or C++ code defined in native libraries as long as the code in the native libraries follows a few COM conventions [1].
  • A general interceptor technology that can be used to intercept method invocations on objects.

When it came to getting C# to consume C++ objects the choices were far from great. For example, consider a sample C++ class that you wanted to consume from C#:

class MessageLogger {
public:
	MessageLogger (const char *domain);
	void LogMessage (const char *msg);
};

One option to expose the above to C# would be to wrap the MessageLogger class in a COM object. This might work for some high-level objects, but it is a fairly repetitive exercise and also one that is devoid of any fun. You can see what this looks like on the COM Interop page.

The other option was to produce a bridge file exposing C-callable functions and invoke those from C#. For the above constructor and method you would end up with something like this:

/* bridge.cpp, compile into bridge.so */
/* extern "C" keeps these names unmangled so the C# [DllImport]
   declarations below can resolve them by name */
extern "C" MessageLogger *Construct_MessageLogger (const char *msg)
{
	return new MessageLogger (msg);
}

extern "C" void LogMessage (MessageLogger *logger, const char *msg)
{
	logger->LogMessage (msg);
}

And your C# bridge, like this:

class MessageLogger {
	IntPtr handle;

	[DllImport ("bridge")]
	extern static IntPtr Construct_MessageLogger (string msg);

	public MessageLogger (string msg)
	{
		handle = Construct_MessageLogger (msg);
	}

	[DllImport ("bridge")]
	extern static void LogMessage (IntPtr handle, string msg);

	public void LogMessage (string msg)
	{
		LogMessage (handle, msg);
	}
}

This gets tedious very quickly.

Our PhyreEngine# binding was a C# binding to Sony's PhyreEngine C++ API. The code got very tedious, so we built a poor man's code generator for it.

To make things worse, the above does not even support overriding C++ classes with C# methods. Doing so would require a whole load of manual code, special cases and callbacks. The code quickly becomes very hard to maintain (as we found out ourselves with PhyreEngine).

This is what drove the motivation to build CXXI.

[1] The conventions that allow Mono to call unmanaged code via its COM interface are simple: a standard vtable layout, the implementation of the AddRef, Release and QueryInterface methods, and the use of a well-defined set of types that are marshaled between managed code and the COM world.

How CXXI Works

Accessing C++ methods poses several challenges. Here is a summary of the components that play a major role in CXXI:

  • Object Layout: this is the binary layout of the object in memory. This will vary from platform to platform.
  • VTable Layout: this is the binary layout that the C++ compiler will use for a given class based on the base classes and their virtual methods.
  • Mangled names: non-virtual methods do not get an entry in the object's vtable; instead these methods are merely turned into regular C functions. The name of the C function is computed from the method's signature (the enclosing class, the method name and the parameter types; some compilers also encode the return type). These names vary from compiler to compiler.

For example, given this C++ class definition, with its corresponding implementation:

class Widget {
public:
	void SetVisible (bool visible);
	virtual void Layout ();
	virtual void Draw ();
};

class Label : public Widget {
public:
	void SetText (const char *text);
	const char *GetText ();
};

The C++ compiler on my system will generate the following mangled names for the SetVisible, Layout, Draw, SetText and GetText methods:

__ZN6Widget10SetVisibleEb
__ZN6Widget6LayoutEv
__ZN6Widget4DrawEv
__ZN5Label7SetTextEPKc
__ZN5Label7GetTextEv

The following C++ code:

	Label *l = new Label ();
	l->SetText ("foo");
	l->Draw ();	

Is roughly compiled into this (rendered as C code):

	Label *l = (Label *) malloc (sizeof (Label));
	_ZN5LabelC1Ev (l);   // Mangled name for the Label's constructor
	_ZN5Label7SetTextEPKc (l, "foo");

	// This one calls draw
	(l->vtable [METHOD_PTR_SIZE*2])();

For CXXI to support these scenarios, it needs to know the exact layout of the vtable, so it knows where each method lives, and it needs to know how to access a given method based on its mangled name.
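
As an illustration of the mangled-name part (this is not how CXXI is implemented internally, just a sketch of the underlying mechanism), a non-virtual C++ method can in principle be reached from C# by P/Invoking the mangled symbol directly; the "widgets" library name here is hypothetical:

using System;
using System.Runtime.InteropServices;

class ManualMangledCall {
	// Binds directly to the mangled symbol for Label::SetText shown above.
	// The C++ "this" pointer is passed explicitly as the first argument.
	[DllImport ("widgets", EntryPoint = "_ZN5Label7SetTextEPKc")]
	public static extern void Label_SetText (IntPtr self, string text);
}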

The following chart shows how a native C++ library is exposed to C# or other .NET languages:

Your C++ source code is compiled twice. Once with the native C++ compiler to generate your native library, and once with the CXXI toolchain.

Technically, CXXI only needs the header files for your C++ project, and only the header files for the APIs that you are interested in wrapping. This means that you can create bindings for C++ libraries for which you do not have the source code, as long as you have their header files.

The CXXI toolchain produces a .NET library that you can consume from C# or other .NET languages. This library exposes a C# class that has the following properties:

  • When you instantiate the C# class, it actually instantiates the underlying C++ class.
  • The resulting class can be used as the base class for other C# classes; any methods flagged as virtual can be overridden from C# (see the sketch after this list).
  • Supports C++ multiple inheritance: the generated C# classes expose a number of cast operators that you can use to access the different C++ base classes.
  • Overridden methods can use the "base" C# keyword to invoke the base class implementation of the given method in C++.
  • You can override any of the virtual methods from classes that support multiple inheritance.
  • A convenience constructor is also provided if you want to instantiate a C# peer for an existing C++ instance that you surfaced through some other means.
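
To make this concrete, here is a hypothetical sketch of what consuming such a generated binding could look like from C#; the Label class and its virtual Draw method are assumed to have been generated by CXXI from the C++ example shown later in this post, and the shape of the generated API may differ:

using System;

// FancyLabel is a plain C# class deriving from the CXXI-generated Label binding.
class FancyLabel : Label {
	// Overrides the C++ virtual method that Label inherits from Widget.
	public override void Draw ()
	{
		// Invoke the C++ base implementation, then add managed behavior.
		base.Draw ();
		Console.WriteLine ("label drawn from managed code");
	}
}

class Test {
	static void Main ()
	{
		// Instantiating the C# class instantiates the underlying C++ object.
		var label = new FancyLabel ();
		label.SetText ("Hello from C#");
		label.Draw ();
	}
}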

This is pure gold.

The CXXI pipeline in turn is made up of three components, as shown in the diagram on the right.

The GCC-XML compiler is used to parse your source code and extract the vtable layout information. The generated XML information is then processed by the CXXI tooling to generate a set of partial C# classes that contain the bridge code to integrate with C++.

This is then combined with any customization code that you might want to add (for example, you can add some overloads to improve the API, add a ToString() implementation, add some async front-ends or dynamic helper methods).

The result is the managed assembly that interfaces with the native static library.

It is important to note that the resulting assembly (Foo.dll) does not encode the actual in-memory layout of the fields in an object. Instead, the CXXI binder determines, based on the ABI being used, what the layout rules for the object are. This means that Foo.dll is compiled only once and can be used across multiple platforms that have different rules for laying out the fields in memory.

Demos

The code on GitHub contains various test cases as well as a couple of examples. One of the samples is a minimal binding to the Qt stack.

Future Work

CXXI is not finished, but it is a strong foundation to drastically improve the interoperability between .NET managed languages and C++.

Currently CXXI achieves all of its work at runtime by using System.Reflection.Emit to generate the bridges on demand. This is useful as it can dynamically detect the ABI used by a C++ compiler.

One of the projects that we are interested in doing is adding support for static compilation, which would allow PS3 and iPhone users to use this technology. It would mean that the resulting library would be tied to the platform on which the CXXI tooling was used.

CXXI currently implements support for the GCC ABI, and has some early support for the MSVC ABI. Support for other compiler ABIs as well as for completing the MSVC ABI is something that we would like help with.

Currently CXXI only supports deleting objects that were instantiated from managed code. Other objects are assumed to be owned by the unmanaged world. Support for the delete operator is something that would be useful.

We also want to better document the pipeline, the runtime APIs and improve the binding.

by Miguel de Icaza (miguel@gnome.org) at December 19, 2011 06:28 PM

December 17, 2011

Jeff Hardy's Blog (NWSGI)

Setting environment variables for MSBuild Exec tasks

MSBuild has an <Exec> task for calling external programs, but (bafflingly) it doesn’t allow you to set the environment the program runs in. In my case, I need to run a Python script with certain directories in the PYTHONPATH.

The Short Way

On Unix machines, this is trivial:

PYTHONPATH="~/foo" python script.py

For Windows’ wonderful cmd.exe shell (which MSBuild uses to run Exec) it’s a little longer:

(set PYTHONPATH=C:\Foo) & python script.py

If you want, you can chain multiple set commands together to set multiple variables:

(set PYTHONPATH=C:\Foo) & (set FOO=42) & python script.py

To actually use this in the MSBuild file, you’ll need to escape it like so:

<Exec Command="(set PYTHONPATH=C:\Foo) &amp; python script.py" />

Getting the quoting right for <Exec> can be tricky; I use the <Message> task for debugging the command line. Remember to use &quot; instead of double quotes.

The Long Way

This method takes more typing but is a bit clearer, especially if you have multiple variables to set. Actually, it can be used to store whole batch files inside the MSBuild file, if necessary.

<PropertyGroup>
  <PythonExec><![CDATA[
set PYTHONPATH=C:\Foo
set FOO=42
python script.py
  ]]></PythonExec>
</PropertyGroup>

<Exec Command="$(PythonExec)" />

A CDATA section is required because the newlines need to be preserved. When running an <Exec> task, all MSBuild does is write the contents of Command to a temporary batch file and execute it. This just lets you provide more than the usual single-line command.

by jdhardy (noreply@blogger.com) at December 17, 2011 08:33 PM

December 16, 2011

Miguel de Icaza

2011: Tell me how you used Mono this year

I have written a summary of Mono's progress in the year 2011, but I want to complement my post with stories from the community.

Did you use Mono in an interesting setup during 2011? Please post a comment on this post, or email me the story and tell me a little bit about it.

by Miguel de Icaza (miguel@gnome.org) at December 16, 2011 05:55 AM

December 14, 2011

Miguel de Icaza

Porto Alegre

We are traveling to Porto Alegre in Brazil today and will be staying in Brazil until January 4th.

Ping me by email (miguel at gnome dot org) if you would like to meet in Porto Alegre to talk hacking, Mono, Linux, open source, iPhone or if you want to enlighten me about the role of scrum masters as actors of social change.

Happy holidays!

by Miguel de Icaza (miguel@gnome.org) at December 14, 2011 07:30 PM

December 13, 2011

Jeff Hardy's Blog (NWSGI)

IronPython 2011 Survey

The IronPython team would like to know more about how IronPython is being used and what improvements people would like to see in 2012.

Take the IronPython 2011 survey!

by jdhardy (noreply@blogger.com) at December 13, 2011 03:55 AM

November 30, 2011

Miguel de Icaza

Farewell to Google's CodeSearch

It seems that part of Steve Jobs' legacy was to give Larry Page some advice: focus. This is according to Steve Jobs' recently published biography.

So Larry Page took the advice seriously and decided to focus. His brand of focus is to kill projects that were distracting from their goals. One of them, and the one I cared the most about, was CodeSearch.

What did CodeSearch do for programmers?

The CodeSearch service was a unique tool as it indexed open source code in the wild.

Codesearch is one of the most valuable tools in existence for all software developers, specifically:

  • When an API is poorly documented, you could find sample bits of code that used the API.
  • When an API's error codes were poorly documented, you could find sample bits of code that handled them.
  • When an API was difficult to use (and the world is packed with those), you could find sample bits of code that used it.
  • When you quickly wanted to learn a language, you knew you could find quality code with simple searches.
  • When you wanted to find different solutions to everyday problems dealing with protocols, new specifications, evolving standards and trends, you could turn to CodeSearch.
  • When you were faced with an obscure error message, an obscure token, an obscure return value or other forms of poor coding, you would find sample bits of code that solved this problem.
  • When dealing with proprietary protocols or just poorly documented protocols, you could find how they worked in minutes.
  • When you were trying to debug yet another broken standard or yet another poorly specified standard, you knew you could turn quickly to CodeSearch to find the answers to your problems (memories of OAuth and IMAP flash in my head).
  • When learning a new programming language or trying to improve your skills in a programming language, you could use CodeSearch to learn the idioms and the best (and worst) practices.
  • When building a new version of a library, either in a new language, making a fluent version, making an open source version, or building a more complete version, you would just go to CodeSearch to find out how other people did things.

It is a shame that Google is turning their back on their officially stated mission "to organize the world's information and make it universally accessible and useful". It is a shame that this noble goal is not as important as competing with Apple, Facebook, Microsoft, Twitter and Yelp.

Comparing Search Engines

While writing this blog entry, I fondly remembered how Codesearch helped me understand the horrible Security framework that ships with iOS. Nobody informed the Apple engineers that "Security through obscurity" was not intended for their developer documentation.

In this particular case, I was trying to understand the semantics of kSecReturnData. How to use this constant and how it interacts with the keyring system is both tricky, and poorly specified in Apple's docs. Sometimes things fail without any indication of what went wrong, other than "error". So I used CodeSearch to figure this out (along with some other 30 constants and APIs in that library that are just as poorly documented).

These are the results of looking for this value in three search engines as of this morning.

First Contender: GrepCode

GrepCode shows absolutely nothing relevant. It shows a bunch of Java packages with no context and no code snippets, and if you make the mistake of drilling down, you won't find anything:

Not useful.

Second Contender: Codase

Codase indexes 250 million lines of code; usually it takes minutes to get this page:

Maybe the server will come back up.

Third Contender: Koders

Koders is part of Black Duck, and searching for the term renders a bunch of matches. Not a single one of the results displayed actually contains a single use of the kSecReturnData constant, and not a single one of the snippets actually shows it. It is as useful as configuring your browser to use StumbleUpon as your search engine:

Not useful.

Google's CodeSearch

And this is what Codesearch shows:

The big innovation on Google's search engine is that it actually works and shows real matches for the text being searched, with a relevant snippet of the information you are looking for.

We are going to be entering the dark ages of software research in the next few months.

Is there a hacker White Knight out there?

Running a service like CodeSearch is going to take a tremendous amount of resources. There are major engineering challenges involved, and hosting a service like this cannot be cheap. It is probably not even profitable.

Larry Page's Google has already dropped the project. We can only hope that in a few years Sergey Brin's Google or Eric Schmidt's Google will bring this service back.

Microsoft is too busy catching up to Google and won't have any spare resources to provide a Bing for code search. And if they did, they would limit the search to Win32 APIs.

Thanks!

I should thank Google for funding that project for as long as they did as well as the Google engineers that worked on it as long as they could. Over the years, it helped me fix problems in a fraction of the time and helped me understand complicated problems in minutes.

The Google engineers whose projects just got shut down in the name of strategy and focus are probably as sad as all of us are.

On the plus side, I get to share this rant on Google Plus with a dozen of my friends!

by Miguel de Icaza (miguel@gnome.org) at November 30, 2011 09:44 AM

November 22, 2011

Miguel de Icaza

Updated Documentation Site

Jeremie Laval has upgraded our Web-based documentation engine over at docs.go-mono.com. This upgrade brings a few features:

New Look: Based on Jonathan Pobst's redesign, this is what our documentation looks like now:

Better Links: links to pages on the site will now properly open the left-side tree to the documentation you linked to. This has been an open request for about six years, and it finally got implemented.

Search: the search box on the web site uses Lucene to search the text on the server side, and shows you the matching results as you type:

Easier to Plug: MonoDoc/Web now easily supports loading documentation from alternate directories, it is no longer limited to loading the system-configured documentation.

No more frames: For years we used frames for the documentation pages. They had a poor experience and made the code uglier. They are now gone.

Powered by Mono's SGen: We have reduced the memory consumption of our web documentation by switching to Mono's Generational GC from Boehm's. The load on the server is lower, responses are faster and we scale better.

The source code changes are now on GitHub in the webdoc module.

We have also added Google Analytics support to our web site to help us determine which bits of documentation are more useful to you.

by Miguel de Icaza (miguel@gnome.org) at November 22, 2011 08:34 PM

November 09, 2011

Mike Stall

Pyvot for Excel

I'm thrilled to see the availability of Pyvot, a Python package for manipulating tabular data in Excel. This is part of the Python Tools for Visual Studio (PTVS) ecosystem.

Check out the codeplex site at http://pytools.codeplex.com/wikipage?title=Pyvot or the tutorial on python.org.

Excel does expose an object model through COM, but it's tricky to use. Pyvot provides a very simple Python programming experience that focuses on your data instead of Excel COM object trivia. Here are some of my favorite examples:

  • Easy to send Python data into Excel, manipulate it in Excel, and then send it back to Python.
  • If you ask for a column in Excel's object model, it will give you back the entire Excel column, including the one million empty cells, whereas Pyvot will just give you back the data you used.
  • Pyvot will recognize column header names from tables.
  • Pyvot makes it easy to compute new columns and add them to your table.
  • Pyvot makes it easy to connect to an existing Excel workbook, even if the workbook has not been saved to a file. (This involves scanning the Running Object Table and doing smart name matching.) This allows you to use Excel as a scratchpad for Python.
  • Pyvot works naturally with Excel's existing auto-filters. This enables a great scenario where you can start with data in Python, send it to Excel and manipulate it with Excel auto-filters (sort it, remove bad values, etc.), and then pull the cleaned data back into Python.

Some other FAQs:

  1. What can't Pyvot do? Pyvot is really focused on tabular data; Excel becomes a data-table viewer for Python. Pyvot is not intended to be a full Excel automation solution.
  2. How does Pyvot compare to VBA? a) Pyvot is just Python, so you can use the vast existing Python libraries. b) VBA is embedded in a single Excel workbook and is hard to share across workbooks, whereas Pyvot is about real Python files that live outside of the workbook and can be shared and managed under source control. c) VBA uses the Excel object model, whereas Pyvot provides a much simpler experience for tabular data.
  3. How does Pyvot compare to an Excel add-in? a) Pyvot runs entirely out-of-process, so you don't need to worry about it crashing Excel on you. b) Excel add-ins, like VBA, use the Excel object model. c) Excel add-ins need to be installed; Pyvot is just loose Python files that don't interfere with your Excel installation.

Anyway, if you need some Excel goodness, especially filters, check out Pyvot and PTVS.

by Mike Stall - MSFT at November 09, 2011 05:57 PM

October 18, 2011

Miguel de Icaza

Hiring Mono Runtime Hackers

As Mono grows on servers, mobile and desktop platforms, we are looking to hire programmers to join our Mono Runtime team.

The Mono Runtime team owns the code generator, the just-in-time and ahead-of-time compilers, the garbage collector, the threadpool and async layers in the runtime and mostly works in the C-side of the house.

If you are a developer with low-level experience in virtual machines or just-in-time compilers, or you love garbage collection or real-time processing, or you read every new research paper on incremental garbage collection, hardware acceleration and register allocation, and you are interested in joining our young, self-funded and profitable startup, we want to hear from you.

Send your resumes to jobs@xamarin.com

by Miguel de Icaza (miguel@gnome.org) at October 18, 2011 08:25 PM

October 14, 2011

Miguel de Icaza

Upcoming Mono Releases: Change in Policies

We have historically made stable releases of Mono that get branched and maintained for long periods of time. During these long periods of time, we evolve our master release for some four to five months while we do major work on the branch.

Historically, we have done some of these large changes because we have rewritten or re-architected large parts of our JIT, our garbage collector, or our compilers.

There were points in the project's history where it took us some nine months to release: seven months of new development followed by two months of beta testing and fixing regressions. With Mono 2.6 we tried to change this, aiming to shorten the release time to at most six months, and we were relatively good at doing this with 2.8 and 2.10.

We were on track to do a quick Mono 2.12 release roughly around May, but the bump in the road in April derailed our plans.

Since 2.10.0 was released two things happened:

  • On Master: plenty of feature work and bug fixing.
  • On our 2.10 branch: bug fixes and backporting fixes from master to 2.10

Now that things have settled at Xamarin and we are getting Mono back into continuous integration builds, we are going to release our first public beta of the upcoming Mono; it will be called Mono 2.11.1. We will keep it under QA until we are happy with the results, and we will then release Mono 2.12 based on it.

But after Mono 2.12, we want to move to a new development model where we keep our master branch always in a very stable state. This means that new experimental features will be developed in branches and only landed to the master branch once they have been completed.

Our goal is to bring the features that we are developing to our users more quickly, instead of having everyone wait for very long periods of time to get their new features.

New Features in Mono 2.11

These are some of the new features available in Mono 2.11:

  • We refactored our C# compiler to have two backends, one based on Cecil and one based on Reflection.Emit, fixing some important usability issues in our compiler.
  • Implemented C# 5 Async.
  • Our C# compiler has TypedReference support (__refvalue, __reftype and __makeref); a short sketch follows this list.
  • Our compiler as a service can compile classes now and has an instance API (instantiate multiple C# compiler contexts independently).
  • Added the .NET 4.5 API profile and many of the new async APIs to use with C# 5.
  • Improved our new Garbage Collector: it is faster, it is more responsive and it is more stable. It has also gained MacOS/iOS native support.
  • We made System.Json available on every profile.
  • We added Portable Class Library support.
  • We added tooling for Code Contracts
  • We added a TPL Dataflow implementation
  • We added fast ThreadLocal support
  • We brought our ASP.NET implementation up to date for 2011, and it now sports an enormously cute new error page, as opposed to the old error page that transported your mind back to 1999.
  • Mono's debugger now supports attaching to a live process (deferred support)
  • Our socket stack is faster on BSD and OSX by using kqueue (on Linux it already uses epoll).
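
As promised above, here is a minimal sketch of the TypedReference keywords; these are undocumented but long-standing C# compiler features, and the snippet is illustrative rather than Mono-specific:

using System;

class TypedReferenceDemo {
	static void Main ()
	{
		int i = 42;

		// __makeref creates a TypedReference that points at the variable.
		TypedReference tr = __makeref (i);

		// __reftype recovers the type stored in the reference.
		Console.WriteLine (__reftype (tr));        // System.Int32

		// __refvalue reads (or writes) through the reference.
		Console.WriteLine (__refvalue (tr, int));  // 42
		__refvalue (tr, int) = 7;
		Console.WriteLine (i);                     // 7
	}
}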

by Miguel de Icaza (miguel@gnome.org) at October 14, 2011 08:31 PM

October 10, 2011

The Voidspace Techie Blog

mock 0.8 beta 4 released: bugfix and minor features

I've released mock 0.8 beta 4. You can download it it or install it with: pip install -U mock==dev mock is a library for testing in Python. ... [602 words]

October 10, 2011 01:05 AM

September 27, 2011

Miguel de Icaza

WinRT and Mono

Today Joseph mentioned to me that some of our users got the impression from my previous post on WinRT that we would be implementing WinRT for Linux. We are not working on a WinRT UI stack for Linux, and do not have plans to.

WinRT is a fabulous opportunity for Mono, because Microsoft is sending a strong message: if you want your code to run in multiple scenarios (server, desktops, sandboxed environments), you want to split your UI code from your backend code.

This is great because it encourages developers to think in terms of having multiple facades for the same code base, which is the direction that we have been taking Mono in over the last few years.

Use the native toolkit on each platform to produce an immersive user experience, and one that leverages the native platform in the best possible way.

These are the APIs that we envision .NET developers using on each platform:

  • Windows: WinRT, Winforms, WPF (fallbacks: Gtk#, Silverlight)
  • MacOS: MonoMac (fallback: Gtk#, Silverlight)
  • Linux: Gtk#
  • Android: MonoDroid APIs
  • iOS: MonoTouch
  • Windows Phone 7: Silverlight
  • XBox360: XNA-based UI

Even if a lot of code could be reused from Moonlight, WinRT is a moving target. It is not clear that the Linux desktop, as we know it today, is keeping up with the growth of other consumer environments. I talked to Tim about this at Build.

Head-less WinRT

There are some GUI-less components of WinRT that *do* make sense to bring to Mono platforms. There is already an implementation of some bits of the headless WinRT components being done by Eric.

The above effort will enable more code sharing to take place between regular .NET 4 apps, WP7 apps, Mono apps and WinRT apps.

by Miguel de Icaza (miguel@gnome.org) at September 27, 2011 06:04 AM

September 20, 2011

Mike Stall

Python Tools for VS

I’ve been having a great time using Python Tools for VS.  It’s a free download that provides CPython language support in Visual Studio 2010. The intellisense is pretty good (especially for a dynamic language!) and the debugger is useful to have. Having a good IDE is changing the way I view the language. Check out the homepage for a long list of features it supports. One other perk is that because it’s using the VS 2010 shell, it works with my favorite VS 2010 editor extensions.

by Mike Stall - MSFT at September 20, 2011 06:16 PM

September 16, 2011

Miguel de Icaza

WinRT demystified

Windows 8, as introduced at Build, is an exciting release: it has important updates to how Microsoft envisions users will interact with their computers, a fresh new user interface, a new programming model and a lot more.

If you build software for end-users, you should watch Jensen Harris discuss the Metro principles in Windows 8. I find myself wanting to spend time using Windows 8.

But the purpose of this post is to share what I learned at the conference specifically about WinRT and .NET.

The Basics

Microsoft is using the launch of Windows 8 as an opportunity to fix long-standing problems with Windows, bring a new user interface, and enable a safe AppStore model for Windows.

To do this, they have created a third implementation of the XAML-based UI system. Unlike WPF, which was exposed only to the .NET world, and Silverlight, which was only exposed to the browser, this new implementation is available to C++ developers, HTML/JavaScript developers and .NET developers.

.NET developers are very familiar with P/Invoke and COM Interop. Those are two technologies that allow a .NET developer to consume an external component, for example, this is how you would use the libc "system (const char *)" API from C#:

	[DllImport ("libc")]
	static extern void system (string command);
	[...]

	system ("ls -l /");
	

We have used P/Invoke extensively in the Mono world to create bindings to native libraries. Gtk# binds the Gtk+ API, MonoMac binds the Cocoa API, Qyoto binds the Qt API, and hundreds of other bindings wrap other libraries that are exposed to C# as object-oriented libraries.

COM Interop allows using C or C++ APIs directly from C# by importing the COM type libraries and having the runtime provide the necessary glue. This is how Mono talked with OpenOffice (which is based on COM), or how Mono talks to VirtualBox (which has an XPCOM based API).

There are many ways of creating bindings for a native library, but doing it by hand is bound to be both tedious and error prone. So everyone has adopted some form of "contract" that states what the API is, and the binding author uses this contract to create their language binding.

WinRT

WinRT is a new set of APIs that have the following properties:

  • It implements the new Metro look.
  • Has a simple UI programming model for Windows developers (you do not need to learn Win32, or what an HDC, WndProc or LPARAM is).
  • It exposes the WPF/Silverlight XAML UI model to developers.
  • The APIs are all designed to be asynchronous.
  • It is a sandboxed API, designed for creating self-contained, AppStore-ready applications. You won't get everything you would need to create, for example, backup software or hard disk partitioning software.
  • The API definitions are exposed in the ECMA 335 metadata format (the same one that .NET uses; you can find them in ".winmd" files).

WinRT wraps both the new UI system and old Win32 APIs, and it happens that this implementation is based on top of COM.

WinRT Projections

What we call "bindings" Microsoft now calls "projections". Projections are the process of exposing APIs to three environments: Native (C and C++), HTML/Javascript and .NET.

  • If you author a component in C++ or a .NET language, its API will be stored in a WinMD file and you will be able to consume it from all three environments (Native, JavaScript and .NET).

    Even in C++ you are not exposed to COM. The use of COM is hidden behind the C++ projection tools. You use what looks and feels like a C++ object oriented API.

    To support the various constructs of WinRT, the underlying platform defines a basic set of types and their mappings to the various environments. In particular, collection objects in WinRT are mapped to constructs that are native to each environment.

    Asynchronous APIs

    Microsoft feels that when a developer is given the choice of a synchronous and an asynchronous API, developers will choose the simplicity of a synchronous API. The result usually works fine on the developer system, but is terrible when used in the wild.

    With WinRT, Microsoft has followed a simple rule: if an API is expected to take more than 50 milliseconds to run, the API is asynchronous.

    The idea of course is to ensure that every Metro application is designed to always respond to user input and to not hang, block or provide a poor user experience.

    Async programming has historically been a cumbersome process as callbacks and state must be cascaded over dozens of places and error handling (usually poor error handling) is sprinkled across multiple layers of code.

    To simplify this process, C# and VB have been extended to support the F#-inspired await/async pattern, turning async programming into a joy. C++ got a setup that is about as good as you can get with C++ lambdas, and Javascript uses promises and "then ()".
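
    To give a feel for the C# side of this, here is a minimal sketch of consuming an asynchronous WinRT API with the new await support; FileOpenPicker and PickSingleFileAsync are the WinRT file-picker APIs, but treat the details as illustrative rather than as the exact preview-SDK surface:

    	using Windows.Storage;
    	using Windows.Storage.Pickers;

    	public class PickerExample {
    		// 'async' marks the method; 'await' suspends it without blocking
    		// the UI thread and resumes when the operation completes.
    		public async void PickDocument ()
    		{
    			var picker = new FileOpenPicker ();
    			picker.FileTypeFilter.Add (".txt");

    			// PickSingleFileAsync returns an IAsyncOperation<StorageFile>,
    			// which the C# projection lets you await directly.
    			StorageFile file = await picker.PickSingleFileAsync ();

    			if (file != null)
    				ShowStatus ("Picked " + file.Name);
    		}

    		void ShowStatus (string message)
    		{
    			// Update the UI here; omitted in this sketch.
    		}
    	}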

    Is it .NET or Not?

    Some developers are confused as to whether .NET is there at all, since not all of the .NET APIs are present (File I/O, Sockets), many were moved, and others were introduced to integrate with WinRT.

    When you use C# and VB, you are using the full .NET framework. But they have chosen to expose a smaller subset of the API to developers to push the new vision for Windows 8.

    And this new vision includes safety/sandboxed systems and asynchronous programming. This is why you do not get direct file system access or socket access and why synchronous APIs that you were used to consuming are not exposed.

    Now, you notice that I said "exposed" and not "gone".

    What they did is expose only a subset of the APIs to the compiler when you target the Metro profile, so your application will not accidentally call File.Create, for example. At runtime though, the CLR will load the full class library, the very one that contains File.Create; internally the CLR could still call something like File.Create, it is just your code that has no access to it.

    This split is similar to what has been done in the past with Silverlight, where not every API was exposed, and where mscorlib was given rights that your application did not have, to ensure system safety.

    You might be thinking that you can use some trick (referencing the GAC library instead of the compiler reference, using reflection to get to private APIs, or P/Invoking into Win32). But all of those uses will be caught by the AppStore review process and you won't be able to publish your app through Microsoft's store.

    You can still do whatever ugly hack you please on your own system. It just won't be possible to publish it through the AppStore.

    Finally, the .NET team has taken this opportunity to do some spring cleaning. mscorlib.dll and System.dll have been split into various libraries and some types have been moved around.

    Creating WinRT Components

    Microsoft demoed creating new WinRT components in both C++ and .NET.

    In the .NET case, creating a WinRT component has been drastically simplified. The following is the full source code for a component that adds two numbers:

    
    	public sealed class AddTwo {
    		public int Add (int a, int b)
    		{
    			return a + b;
    		}
    
    		public async IAsyncOperation SubAsync (int a, int b)
    		{
    			return a - await (CountEveryBitByHand (b));
    		}
    	}
    	

    You will notice that there are no COM declarations of any kind. The only restriction is that your class must be sealed (unless you are creating a XAML UI component, in which case the restriction is lifted).

    There are also some limitations: you cannot have private fields on structures, and there is no Task<T> for asynchronous APIs; instead you use the IAsyncOperation interface. Update to clarify: the no-private-fields rule applies only to structs exposed to WinRT, not to classes.
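
    As a rough sketch of what that looks like in practice (assuming the AsyncInfo.Run helper from the .NET interop support for WinRT, which may postdate the Build preview bits), a component can still use Task internally while exposing IAsyncOperation<int> on its public surface:

    	using System.Runtime.InteropServices.WindowsRuntime;
    	using System.Threading.Tasks;
    	using Windows.Foundation;

    	public sealed class Calculator {
    		// The public WinRT surface exposes IAsyncOperation<int>
    		// instead of Task<int>; internally we still write Task-based code.
    		public IAsyncOperation<int> AddSlowlyAsync (int a, int b)
    		{
    			return AsyncInfo.Run (async cancel => {
    				await Task.Delay (1000, cancel);   // simulate slow work
    				return a + b;
    			});
    		}
    	}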

    UI Programming

    When it comes to your UI selection, you can either use HTML with CSS to style your app or you can use XAML UI.

    To make it easy for HTML apps to adhere to the Metro UI style and interaction model, Microsoft distributes Javascript and CSS files that you can consume from your project. Notice that this won't work on the public web. As soon as you use any WinRT APIs, your application is a Windows app, and won't run in a standalone web browser.

    .NET and C++ developers get to use XAML instead.

    There is clearly a gap to be filled in the story. It should be possible to use Microsoft's Razor formatting engine to style applications using HTML/CSS while using C#, especially since they have shown the CLR running on their HTML/JS Metro engine.

    Right now HTML and CSS styling is limited to Javascript applications.

    In Short

    Microsoft has created a cool new UI library called WinRT, they have made it easy to consume from .NET, Javascript and C++, and if you adhere to their guidelines, they will publish your app in their AppStore.

    Xamarin at BUILD

    If you are at BUILD, come join us tonight at 6:30 at the Sheraton Park hotel, just after Meet the Experts. Come talk about Mono, Xamarin, MonoTouch, MonoDroid and MonoMac and discuss the finer points of this blog over an open bar.

    Comments

    There is a long list of comments in the moderation queue that are not directly related to WinRT, .NET or this post's topic, so I won't be approving those, to keep things focused. There are better forums for discussions on Metro.

  • by Miguel de Icaza (miguel@gnome.org) at September 16, 2011 06:03 AM

    September 14, 2011

    Miguel de Icaza

    Xamarin and Mono at the BUILD Conference

    Continuing our tradition of getting together with Mono users at Microsoft conferences, we are going to be hosting an event at the Sheraton Hotel next to the conference on Thursday at 6:30pm (just after Ask the Experts).

    Come join us with your iOS, Android, Mac and Linux questions.

    by Miguel de Icaza (miguel@gnome.org) at September 14, 2011 09:07 PM

    September 08, 2011

    Miguel de Icaza

    MonoDevelop 2.6 is out

    Lluis just released the final version of MonoDevelop 2.6.

    This release packs a lot of new features, some of my favorite features in this release are:

    • Git support.
      • It not only provides the regular source code control commands, it adds full support for the various Git idioms not available in our Subversion addin.
      • Based on Java's JGit engine
      • Ported to C# using db4objects' Sharpen tool, which Lluis updated significantly.
      • Logging and Blaming are built into the editor.
    • Mac support:
      • Our fancy MonoMac support lets you build native Cocoa applications. If you have not jumped into this Steve Jobs Love Fest, you can get started with our built-in templates and our online API documentation.
      • Native File Dialogs! We now use the operating system file dialogs, and we even used our own MonoMac bindings to get this done.
      • You can also check my Mac/iOS-specific blog for more details.
    • Unified editor for Gtk#, ASP.NET, MonoTouch and MonoDroid: we no longer have to track various forks of MonoDevelop, they have all converged into one tree.

    The above is just a taste of the new features in MonoDevelop 2.6. There are many more; nominate your own!

    Congratulations to the MonoDevelop team on the great job they did!

    And I want to thank everyone that contributed code to MonoDevelop, directly or indirectly to make this happen.

    by Miguel de Icaza (miguel@gnome.org) at September 08, 2011 02:11 AM

    September 06, 2011

    Miguel de Icaza

    Learning Unix

    As I meet new Unix hackers using Linux or Mac, sometimes I am surprised at how few Unix tricks they know. It is sometimes painful to watch developers perform manual tasks on the shell.

    What follows are my recommendations on how to improve your Unix skills, with a little introduction as to why you should get each book. I have linked to each one of those books with my Amazon affiliate link, so feel free to click on those links liberally.

    Here is the list of books that programmers using Unix should read. It will only take you a couple of days to read them, but you will easily increase your productivity by a whole order of magnitude.

    The Basics

    The Unix Programming Environment by Kernighan and Pike is a must-read. Although this is a very old book and it does not cover the fancy new features in modern versions of Unix, no other book covers in such beauty the explanation of the shell quoting rules, expansion rules, shell functions and the redirection rules.

    Every single thing you do in Unix will use the above in some form or shape, and until you commit those to memory you will be a tourist, and not a resident.

    Then you will learn sed and basic awk, both tools that you will use on a daily basis once you become proficient. You will never have to be scared of sed or regular expressions again.

    Save yourself the embarrassment, and avoid posting on the comments section jwz's quote on regular expressions. You are not jwz.

    It will take you about a week of commuting by bus to read it. You do not have to finish the book, you can skip over the second part.

    Unix Boot Camp

    While Kernighan's book gives you basic literacy, you also need to develop your muscles, and you need to do this fast, without buying a book so thick and so packed with ridiculous screenshots that you will never get past page 20.

    Get UNIX for the Impatient. This book is fun, compact and is packed with goodies that will make you enjoy every minute in Unix.

    Learn Emacs

    Emacs has had a strong influence in Unix over the years. If you learn to use Emacs, you will automatically learn the hotkeys and keybindings in hundreds of applications in Unix.

    The best place to learn Emacs is to launch Emacs and then press Control-h and then t. This is the online tutorial and it will take you about two hours to complete.

    The knowledge that you will gain from Emacs will be useful for years to come. You will thank me. And you will offer to buy me a beer, which I will refuse because I would rather have you buy me a freshly squeezed orange juice.

    Tooting my own horn

    Learn to use the Midnight Commander.

    The Midnight Commander blends the best of both worlds: GUI-esque file management with full access to the Unix console.

    The Midnight Commander is a console application that shows 2 panels listing two different directories side-by-side and provides a command line that is fed directly to the Unix shell.

    The basics are simple: use the arrow keys to move around, Control-S to do incremental searches over filenames, Control-t to tag or untag files and the F keys to perform copy, move or delete operations. Copy and Move default to copy to the other panel (which you can conveniently switch to by pressing the tab key).

    There is no better way of keeping your file system organized than using my file manager.

    Becoming a Power User

    If you cannot quench your thirst for knowledge, there is one last book that I will recommend. This is the atomic bomb of Unix knowledge.

    Unix Power Tools is a compilation of tricks by some of the best Unix users, compiled into a huge volume. This is a book of individual tricks, each about a page long, ideal to keep either on your bedside table or in the restroom to pick up a new trick every day.

    Mavis Beacon

    At this point you might be thinking "I am awesome", "the world is my oyster" and "Avatar 3D was not such a bad movie".

    But unless you touch-type, you are neither awesome, nor are you in a position to judge the qualities of the world as an oyster or any James Cameron movies.

    You have to face the fact that not only are you a slow typist, you also look a little bit ridiculous. You are typing with two, maybe three fingers on each hand and you move your head like a chicken as you alternate between looking at your keyboard and looking at your screen.

    Do humanity a favor and learn to touch type.

    You can learn to touch type in about three weeks if you spend some two to three hours per day using Mavis Beacon Teaches Typing.

    Mavis Beacon costs seventeen dollars ($17). Those seventeen dollars and the sixty-three hours you will spend using it will do more to advance your career than the same sixty-three hours spent reading editorials on Hacker News.

    Classics

    All of the books I list here have stood the test of time. They were written at a time when books were designed to last a lifetime.

    Unlike most modern computer books, all of these were a pleasure to read.

    by Miguel de Icaza (miguel@gnome.org) at September 06, 2011 06:45 PM

    September 05, 2011

    The Voidspace Techie Blog

    matplotlib and numpy for Python 2.7 on Mac OS X Lion

    Unfortunately, due to an API change, the latest released version of matplotlib is incompatible with libpng 1.5. Take a wild guess as to which version comes with Mac OS X Lion. ... [275 words]

    September 05, 2011 12:18 AM

    August 31, 2011

    Dino Viehland

    Announcing Python Tools for Visual Studio 1.0

    As you can see from Soma's blog on Monday, we released Python Tools for Visual Studio 1.0. Of course I'm excited about this release as it's the first stable release I've put out since IronPython 2.6 (and the first blog post too!). This release of PTVS focuses on a combination of the core IDE experience (intellisense, debugging, profiling, code navigation, etc.) as well as a set of features which target Technical / High Performance Computing. That includes support for MPI cluster debugging and integrated IPython support.

    PTVS has been a long time in the making and it represents the fruition of a lot of effort here at Microsoft to produce a Python IDE. This actually goes back a long way, starting its development several years ago on the IronPython team. Back then we had done several small projects to figure out what we'd want to do in the Python IDE space. That included a couple of attempts at building a standalone IDE using the same components Visual Studio is built upon, as well as a few different attempts at extending Visual Studio to add Python support (some of this having seen the light of day in the form of IronPython Studio and the Python integration which ships w/ the VS SDK). Ultimately we were able to re-use bits and pieces from all of these attempts and release IronPython Tools for Visual Studio w/ the Alpha of IronPython 2.7.

    But we needed one last push to turn PTVS into what you see today - and that final push brought support for more than just IronPython, and we have now turned Visual Studio into a general-purpose Python IDE. Whatever version of Python you'd like to use, I think you'll find that PTVS provides a great experience - whether you're using traditional CPython or IronPython (which we still have special support for, including WPF designer support) or another Python distribution such as the speedy PyPy. The only feature which doesn't currently work across Python distributions is the profiling support, which for performance reasons is tied to the CPython embedding API.

    Anyway, if you're looking to do Python development on Windows, I hope you'll give PTVS a shot and let us know what you think.

    by DinoV at August 31, 2011 08:25 PM

    August 17, 2011

    The Voidspace Techie Blog

    mock 0.8 beta 3 released: feature complete

    I've released mock 0.8 beta 3. You can download it or install it with: pip install -U mock==dev mock is a library for testing in Python. ... [534 words]

    August 17, 2011 11:35 PM

    August 04, 2011

    Miguel de Icaza

    And we are back: Mono 2.10.3

    This is Xamarin's first official Mono release.

    This is a major bug fix release that addresses many of the problems that were reported since our last release back on April 25th.

    The detailed release notes have all the details, but the highlights of this release include:

    • MacOS X Lion is supported: both the Mono runtime and Gtk+ as shipped with Mono have been updated to run properly on Lion. This solves the known problems that users had running MonoDevelop on MacOS X.
    • Vastly improved WCF stack
    • Many bug fixes to our precise garbage collector.

    Major features continue to be developed in the main branch. Currently we are just waiting for the C# 5.0 Asynchronous Language support to be completed to release that version.

    Mono 2.10.3 also serves as the foundation for the upcoming Mono for Android 1.0.3 and MonoTouch 4.1.

    You can get it from Mono's Download Site.

    Currently we offer source code, Windows and MacOS packages. We will publish Linux packages as soon as we are done mirroring the contents of the old site that contains the Linux repositories.

    On C# 5.0

    Our new compiler, as you might know, has been rewritten to support two backends: a System.Reflection.Emit backend, and the brilliant IKVM.Reflection backend.

    The C# 5.0 support found on master matches the C# 5.0 support shipped by Microsoft in their latest public release.

    To try it out, use -langversion:future when invoking the compiler. You can try some of our samples in mono/mcs/tests/test-async*.cs
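
    As a quick, hedged illustration (the test files in the tree are the authoritative samples, and the exact compiler name and class library support on master may differ from this sketch), a minimal async program looks something like this, compiled with something like dmcs -langversion:future async.cs:

    	using System;
    	using System.Threading.Tasks;

    	class AsyncDemo {
    		// 'async'/'await' let the method yield while the task runs.
    		static async Task<int> SlowAddAsync (int a, int b)
    		{
    			return await Task.Factory.StartNew (() => a + b);
    		}

    		static void Main ()
    		{
    			Console.WriteLine (SlowAddAsync (2, 3).Result);   // prints 5
    		}
    	}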

    by Miguel de Icaza (miguel@gnome.org) at August 04, 2011 10:00 PM

    The Voidspace Techie Blog

    mock 0.8 beta 2:bug fix and side_effect iterables

    I've released mock 0.8 beta 2. You can download it or install it with: pip install -U mock==dev mock is a library for testing in Python. ... [459 words]

    August 04, 2011 10:48 AM

    July 25, 2011

    The Voidspace Techie Blog

    mock 0.8 beta 1: easier asserts for multiple and chained calls

    I've released mock 0.8 beta 1. You can download it or install it with: pip install -U mock==dev mock is a library for testing in Python. ... [1376 words]

    July 25, 2011 11:02 AM

    July 20, 2011

    Miguel de Icaza

    MonoDevelop on Lion

    We here at Xamarin are as excited as you are about the release of Lion. But unfortunately we're not quite ready to support you on Lion yet, and MonoDevelop doesn't work quite right. We're working around the clock to make MonoDevelop work perfectly on Lion, and we'll let you know as soon as it's ready.

    Update on July 29th: We have most of the fixes in place for Mono and will issue a build for testing on the Alpha channel soon.

    by Miguel de Icaza (miguel@gnome.org) at July 20, 2011 10:16 PM

    Aaron Marten's WebLog

    Visual Studio @ UserVoice

    We now have an official site for Visual Studio on UserVoice! Please use this as a way to send suggestions and feature requests to the Visual Studio team. For specific bugs and errors, please continue to use Microsoft Connect.

    http://visualstudio.uservoice.com

    image

    by Aaron Marten at July 20, 2011 04:53 PM

    July 18, 2011

    Miguel de Icaza

    Novell/Xamarin Partnership around Mono

    I have great news to share with the Mono community.

    Today together with SUSE, an Attachmate Business Unit, we announced:

    • Xamarin will be providing the support for all of the existing MonoTouch, Mono for Android and Mono for Visual Studio customers.
    • Existing and future SUSE customers that use the Mono Enterprise products on their SLES and SLED systems will continue to receive great support backed by the engineering team at Xamarin.
    • Xamarin obtained a perpetual license to all the intellectual property of Mono, MonoTouch, Mono for Android, Mono for Visual Studio and will continue updating and selling those products.
    • Starting today, developers will be able to purchase MonoTouch and Mono for Android from the Xamarin store. Existing customers will be able to purchase upgrades.
    • Xamarin will be taking over the stewardship of the Mono open source community project. This includes the larger Mono ecosystem of applications that you are familiar with, including MonoDevelop and the other Mono-centric projects in the Mono Organization at GitHub.

    We are a young company, but we are completely dedicated to these mobile products and we cannot wait to bring smiles to every one of our customers.

    Roadmaps

    Our immediate plans for both MonoTouch and Mono for Android are to make sure that your critical and major bugs are fixed. We have been listening to the needs of the community and we are working to improve these products to meet your needs. You can expect updates to the products in the next week.

    In the past couple of months, we have met with some of our users and we have learned a lot about what you want. We incorporated your feature requests into our product roadmaps for both MonoTouch and Mono for Android.

    Another thing we learned is that many companies need a priority support offering for this class of products, so we have introduced one. It can either be purchased when you first order MonoTouch or Mono for Android, or added later as an upgrade.

    Next Steps

    Our goals are to delight software developers by giving them the most enjoyable environment, languages and tools to build mobile applications.

    We are thankful to everyone that provided feedback to us in our online form that we published a month ago. Please keep your feedback coming, you can reach us at contact@xamarin.com. We are reading every email that you send us and you can use my new miguel at new company dot com email address to reach me.

    We will be at the Monospace conference this weekend at the Microsoft NERD Center, hope to see you there!

    Remember to purchase early and often so we have the resources to bring you the best developer tools on the planet.

    by Miguel de Icaza (miguel@gnome.org) at July 18, 2011 08:27 PM

    The Voidspace Techie Blog

    Mock subclasses and their attributes

    This blog entry is about creating subclasses of mock.Mock. mock is a library for testing in Python. It allows you to replace parts of your system under test with mock objects. ... [341 words]

    July 18, 2011 05:22 PM

    July 16, 2011

    The Voidspace Techie Blog

    Mock 0.8 alpha 2: patch.multiple, new_callable and non-callable mocks

    I've released mock 0.8 alpha 2. You can download it or install it with: pip install -U mock==dev mock is a library for testing in Python. ... [496 words]

    July 16, 2011 03:36 PM

    July 14, 2011

    Hex Dump

    I am speaking at PyCon AU 2011 about CouchDB

    The official schedule for PyCon Australia 2011 has been announced (http://pycon-au.org/2011/conference/schedule/). My talk is the first session after the opening keynote and will be an overview of CouchDB and how you can use it with Python. "CouchDB (http://couchdb.apache.org/) is an open source, document-oriented NoSQL Database Management Server. It supports queries via views using MapReduce,

    by Mark Rees (noreply@blogger.com) at July 14, 2011 12:40 AM

    July 13, 2011

    Jeff Hardy's Blog (NWSGI)

    Using Downloaded IronPython Modules

    One of Internet Explorer’s many “helpful” features is one that will “taint” any downloaded files so that the system knows they are from the internet. Honestly, I can’t see what value this feature adds, other than breaking CHM files and preventing IronPython from using downloaded modules.
    This was brought to my attention by Shay Friedman, who was trying to use IronPython.Zlib but couldn’t get it to work. In particular, the error message was misleading:
    IronPython 2.6.1 (2.6.10920.0) on .NET 4.0.30319.1
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import clr
    >>> clr.AddReferenceToFileAndPath('C:\Users\Jeff\Downloads\IronPython.Zlib-2.6-clr4\IronPython.Zlib.dll')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    IOError: System.IO.IOException: file does not exist: C:\Users\Jeff\Downloads\IronPython.Zlib-2.6-clr4\IronPython.Zlib.dll
       at Microsoft.Scripting.Actions.Calls.MethodCandidate.Caller.Call(Object[] args, Boolean& shouldOptimize)
    ...
    >>>
    The file, of course, does exist, so why can’t IronPython find it?
    There are actually a few things that interplay here: first, the file must be downloaded with a browser that taints it (which I believe is just IE and Chrome), and second, it must be unzipped with Windows’ built-in unzipping tools. The built-in tools have the interesting property that, when unzipping a tainted zip file, they will also taint all of the unzipped files. Finally, the punchline: .NET will not load an assembly that is tainted.
    So how do we get around this? Well, you can:
    • use a different browser
    • use a different unzipping tool (I highly recommend 7-zip)
    • unblock the zip file prior to unzipping
    To unblock the file, just right click on the zip file, click “Properties”, and click “Unblock”:
    unblock-file
    If you’ve already unzipped the file, you can just unblock the DLL. Depending on where you unzipped the file to, you may need to use an elevated Explorer window. You can also unblock multiple files from the command line.
    This may well affect applications other than IronPython, so it’s just one more thing to watch for.
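
    If you prefer to unblock files programmatically, the mark is stored in an NTFS alternate data stream called Zone.Identifier attached to each downloaded file, and deleting that stream is equivalent to clicking “Unblock”. Here is a minimal C# sketch (the Unblocker helper is just for illustration; the Win32 DeleteFile call accepts stream paths, which the managed File APIs do not):

    	using System;
    	using System.Runtime.InteropServices;

    	static class Unblocker {
    		// DeleteFile accepts alternate-data-stream paths such as
    		// "IronPython.Zlib.dll:Zone.Identifier", which is where the
    		// "downloaded from the internet" mark lives.
    		[DllImport ("kernel32", CharSet = CharSet.Unicode, SetLastError = true)]
    		static extern bool DeleteFile (string name);

    		public static void Unblock (string path)
    		{
    			if (!DeleteFile (path + ":Zone.Identifier"))
    				Console.WriteLine ("Nothing to unblock (or access denied): {0}", path);
    		}
    	}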

    by jdhardy (noreply@blogger.com) at July 13, 2011 03:46 PM

    July 06, 2011

    Miguel de Icaza

    Update on Mono

    I have a posted an update on Mono and the upcoming release of Mono 2.12.

    by Miguel de Icaza (miguel@gnome.org) at July 06, 2011 08:45 PM

    July 01, 2011

    Miguel de Icaza

    Mono Consultants

    We are getting flooded with paid support requests for Mono: developers looking for us to fix bugs in Mono, to do custom work, to port applications and libraries, and to adjust Mono for specific needs.

    But we are trying to be a product company as opposed to a support company.

    We still want to help the Mono user community, and with all of the Mono talent out there, at least we can use this opportunity to get both groups in touch: the users that want custom engineering done, with the talented list of hackers.

    If you are a consultant available to do custom engineering and support for customers, we would love to put you in touch with people that need the custom engineering done. Email us at contact@xamarin.com, in the subject line, specify that you are available for custom engineering, and in the body of the message list both your Mono skills (C# or C coding) and your availability to engage on those gigs.

    We will then get you in touch with the users that need the work done.

    by Miguel de Icaza (miguel@gnome.org) at July 01, 2011 04:21 AM

    June 29, 2011

    Miguel de Icaza

    Xamarin Joy Factory

    Setting up a new company consumes a lot of time, especially as we are developing as fast as we can not one, but two products: .NET for iPhone and .NET for Android.

    Structurally, we are better off than we were the first time that we built these products. We have more developers working on each product than we did the first time around, so progress is faster. But we also had to swap the developers around: those that wrote Foo cannot work on Foo again. This is just one of the things that we have to do to ensure a clean-room implementation.

    Our vision is to create happy developers. We did that in the past by bringing the C# language, garbage collection, LINQ, strongly typed APIs, Parallel FX, intellisense and inline documentation to iPhone and Android developers. And by making it possible for the world's 6 million .NET developers to reuse their skills on the most popular mobile platforms.

    This time around, we are doing even more. We are addressing many of the frustrations that developers had with the old products and making sure that those frustrations go away.

    Nat and I complement each other very well here. This means that there are a lot of new things that will be present in our offering that we never did in the past.

    There is a new level of polish that will be familiar to anyone who knows Nat's previous products (SUSE Studio, NLD/SLED, Ximian Desktop). Everyone at Xamarin could feel that Nat was hard at work when they noticed that one of the first things he did was engage six design firms and an army of technical writers to ensure that our products go from "Nice" to "Amazing". And that was in his second week as CEO; a lot has happened since.

    I do not want to give away everything that we are doing, it would ruin the surprise, but we are here to deliver joy to programmers everywhere.

    If you are interested in working with us, and in making mobile development and .NET development a joy that everyone can enjoy, check out our Jobs page.

    Where we are now

    It gives me great pleasure to say that we have elevated the discourse on the iPhone simulator and my Chicken-powered TweetStation is up and running with the new iOS product. The picture on the left is TweetStation powered by MonoTouch, the picture on the right is TweetStation powered by Xamarin's iPhone product:


    TweetStation on MonoTouch

    TweetStation on Xamarin iOS

    Update: TweetStation now starts up on Device! We have the static compiler working!

    We also have the delicious iOS5 APIs exposed as strongly-typed and intellisense-friendly C#. We are now updating the APIs from Beta1 to Beta2, which should be completed today or tomorrow.

    Our Android efforts are moving fast. Only this morning we got Layouts to render on the device. This is a lot of work, as it gets Dalvik to start Mono, and initializes our entire bridge and exercises the C# and Java bridge. In addition, we have identified and fixed a serious problem in the distributed garbage collector.

    We also have a number of surprises for everyone in MonoDevelop, we believe that you guys are going to love the new features for iPhone and Android development.

    There is still a lot of polish left to do. We are working as hard as we can to have Preview releases in your hands, but we feel confident that we will have a great product for sale by the end of the summer. We hope you will all max out your credit cards buying it.

    by Miguel de Icaza (miguel@gnome.org) at June 29, 2011 01:42 AM

    June 14, 2011

    The Voidspace Techie Blog

    mock 0.8 alpha 1: New Features

    This is a long entry, please forgive me. It describes all the new features in mock 0.8.0 alpha 1. The main reason I need to describe it here is that I haven't yet written the documentation. ... [2527 words]

    June 14, 2011 12:13 PM

    June 13, 2011

    The Voidspace Techie Blog

    Mocking Generator Methods

    Another mock recipe, this one for mocking generator methods. A Python generator is a function or method that uses the yield statement to return a series of values when iterated over [1]. ... [267 words]

    June 13, 2011 05:24 PM

    June 12, 2011

    Jeff Hardy's Blog (NWSGI)

    NWSGI 2.1 Now Available

    I’ve finally updated NWSGI to use IronPython 2.7: NWSGI 2.1. The only other change is that NWSGI.dll will be added to the GAC by default by the installer.

    NWSGI 3 Update

    The big feature of NWSGI 3 is decoupling it from IIS and ASP.NET, which involved creating an abstraction layer for web servers (which is funny, because that’s what WSGI is). Shortly after I started that, the OWIN project started, which has essentially the same goal. Since I hate duplicating effort, NWSGI 3 is on hold until OWIN stabilizes, which hopefully shouldn’t be too much longer.

    by jdhardy (noreply@blogger.com) at June 12, 2011 08:25 PM

    June 06, 2011

    The Voidspace Techie Blog

    Another approach to mocking properties

    mock is a library for testing in Python. It allows you to replace parts of your system under test with mock objects. ... [512 words]

    June 06, 2011 08:17 PM

    May 30, 2011

    The Voidspace Techie Blog

    mock 0.7.2 released

    There's a new minor release of mock, version 0.7.2 with two bugfixes in it. http://pypi.python.org/pypi/mock/ (download) http://www.voidspace.org.uk/python/mock/ (documentation) https://code.google.com/p/mock/ (repo and issue tracker) mock is a Python library for simple mocking and patching (replacing objects with mocks during test runs). ... [696 words]

    May 30, 2011 08:11 PM

    The Voidspace Techie Blog

    namedtuple and generating function signatures

    Kristjan Valur, the chief Python developer at CCP games (creators of Eve Online), has posted an interesting blog entry about the use of exec in namedtuple. namedtuple is a relatively recent, and extraordinarily useful, part of the Python standard library. ... [682 words]

    May 30, 2011 02:00 PM

    The Voidspace Techie Blog

    Nothing is Private: Python Closures (and ctypes)

    As I'm sure you know Python doesn't have a concept of private members. One trick that is sometimes used is to hide an object inside a Python closure, and provide a proxy object that only permits limited access to the original object. ... [482 words]

    May 30, 2011 12:40 PM

    May 29, 2011

    The Voidspace Techie Blog

    Using patch.dict to mock imports

    I had an email from a mock user asking if I could add a patch_import to mock that would patch __import__ in a namespace to replace the result of an import with a Mock. It's an interesting question, with a couple of caveats: Don't patch __import__. ... [522 words]

    May 29, 2011 07:50 PM

    May 25, 2011

    Miguel de Icaza

    Xamarin recruits best CEO in the Industry

    I could not be more excited about this.

    Nat Friedman has joined Xamarin as a company founder and CEO this week.

    Nat and I have known each other and worked together on and off since the early days of Linux. In 1999, we started Ximian to advance the state of Linux, user experience and developer platforms - with many of our efforts brought to fruition after our acquisition by Novell in 2003.

    Anyone that has had the pleasure of working with Nat knows that ideas come in on one side, and objects of desire come out the other end.

    In mobile development, we've discovered a great opportunity: a need for products that developers love. And we are going to fill this need with great products that will make everyone's eyes shine every time they use our software.

    Update: Nat's most recent product was SUSE Studio.

    by Miguel de Icaza (miguel@gnome.org) at May 25, 2011 08:14 PM

    The Voidspace Techie Blog

    Implementing __dir__ (and finding bugs in Pythons)

    A new magic method was added in Python 2.6 to allow objects to customise the list of attributes returned by dir. The new protocol method (I don't really like the term "magic method" but it is so entrenched both in the Python community and in my own mind) is __dir__. ... [1181 words]

    May 25, 2011 10:04 AM

    May 18, 2011

    Hex Dump

    Python Informix Database Connection Options

    I am currently at the International Informix Users Group Conference (http://www.iiug.org/index.php) in Kansas. In the opening keynote by Jerry Keesee, there was some discussion about IBM's Open Source initiatives for Informix. On the accompanying slide, Python and Django were listed. This reminded me that I hadn't taken stock of what the Informix database connection options were for the Python user lately

    by Mark Rees (noreply@blogger.com) at May 18, 2011 03:58 PM

    May 17, 2011

    Miguel de Icaza

    Announcing Xamarin

    Today we start Xamarin, our new company focused on Mono-based products.

    These are some of the things that we will be doing at Xamarin:

    • Build a new commercial .NET offering for iOS
    • Build a new commercial .NET offering for Android
    • Continue to contribute, maintain and develop the open source Mono and Moonlight components.
    • Explore the Moonlight opportunities in the mobile space and the Mac appstore.

    We believe strongly in splitting the presentation layer from the business logic in your application. We want to support your backend needs with C# on the server, the client or mobile devices, and to give you the tools to use .NET languages on every desktop and mobile client.

    Development started early this morning. We will first deliver the iPhone stack, followed by the Android stack, and then the Moonlight ports to both platforms.

    The new versions of .NET for the iPhone and Android will be source compatible with MonoTouch and Mono for Android. Like those versions, they will be commercial products, built on top of the open core Mono.

    In addition, we are going to provide support and custom development of Mono. A company that provides International Mono Support, if you will.

    As usual, your feedback will help us determine which platforms and features are important to you. Help us by filling out our survey. If you give us your email address, we will also add you to our preview/beta list for our upcoming products.

    Fighting for Your Right to Party

    We have been trying to spin Mono off from Novell for more than a year now. Everyone agreed that Mono would have a brighter future as an independent company, so a plan was prepared last year.

    To make a long story short, the plan to spin off was not executed. Instead on Monday May 2nd, the Canadian and American teams were laid off; Europe, Brazil and Japan followed a few days later. These layoffs included all the MonoTouch and MonoDroid engineers and other key Mono developers. Although Attachmate allowed us to go home that day, we opted to provide technical support to our users until our last day at Novell, which was Friday last week.

    We were clearly bummed out by this development, and had no desire to quit, especially with all the great progress in this last year. So, with a heavy dose of motivation from my music teacher, we hatched a plan.

    Now, two weeks later, we have a plan in place, which includes both angel funding to keep the team together and a couple of engineering contracts that will help us stay together as a team while we ship our revenue-generating products.

    Update: although there was a plan to get Angel funding, it turns out that we self-funded the whole thing in the end.

    Next Steps

    Our plan is to maximize the pleasure that developers derive from using Mono and .NET languages on their favorite platforms.

    We do have some funding to get started and ship our initial products. But we are looking to raise more capital to address the shortcomings that we could not afford to address before; these include:

    • Tutorials for our various developer stacks
    • API documentation for the various Mono-specific APIs
    • Dedicated Customer Support Software (assistly or getsatisfaction)
    • Upgrade our Bug system
    • Training
    • Consulting and Support
    • and Marketing: we have a best-of-breed developer platform, and we need the world to know. Our previous marketing budget was what the ancient Olmec culture referred to as Zero.

    Stay tuned for more, meanwhile, hope to see you in July at the Monospace conference in Boston!

    by Miguel de Icaza (miguel@gnome.org) at May 17, 2011 12:35 AM

    May 13, 2011

    The Voidspace Techie Blog

    Django concurrency, database locking and refreshing model objects

    Using expressions to make some of our model updates atomic (as discussed previously) wasn't sufficient to make all of our operations safe for concurrent database modifications (although still useful). This is because having fetched some values we wanted to perform operations based on those values, and they must not change whilst the operations are taking place (because the end result will be written back and would overwrite any other changes made). ... [1090 words]

    May 13, 2011 10:06 AM

    May 12, 2011

    The Voidspace Techie Blog

    mock 0.7.1 and matching objects in assert_called_with

    I've done a new release of mock, version 0.7.1. There are no code changes, but the new release fixes some packaging issues identified by Michael Fladischer. ... [624 words]

    May 12, 2011 11:20 PM

    May 10, 2011

    Aaron Marten's WebLog

    Visual Studio Extensions and Build Servers

    From time to time, we see questions around building a project created with the Visual Studio 2010 SDK on a build server (e.g. Team Foundation Build, TeamCity, CC.NET, etc…). The primary misconception that folks have is that you must install Visual Studio 2010 + SDK on the build server.

    In this post, I’ll walk through the process of getting a C#/VB VSPackage project up and running on Team Foundation Build, without requiring an install of Visual Studio on the build agent machine. The same steps apply for editor extensions or other extensibility project types.

    Once you’ve configured the build server and are ready to try out a build, you’ll probably see something like the following error in your build log:

    The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\VSSDK\Microsoft.VsSDK.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    Step #1: Put Visual Studio SDK targets/tasks in source control

    Since neither Visual Studio nor the Visual Studio SDK are installed on my build machine, the build complains about the missing Microsoft.VsSDK.targets file. This is simple enough to fix by doing the following:

    1. Create a folder at the root of your solution directory called “vssdk_tools”. We’ll be adding all the necessary targets, tasks, etc… to this folder and adding it to source control.
    2. Copy the contents of %ProgramFiles%\MSBuild\Microsoft\VisualStudio\v10.0\VSSDK into this directory.
    3. Add the contents of your vssdk_tools directory to source control.
      • If you’re using TFS Source Control, you can do this via the “tf add” command or through the Source Control Explorer tool window in Visual Studio 2010.
    4. Edit your project file to point to this new targets file. Change the line:
      <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\VSSDK\Microsoft.VsSDK.targets" />
         to
      <Import Project="..\vssdk_tools\Microsoft.VsSDK.targets" />

    Let’s try checking in again and seeing where we are now:

    image

    Step #2: Put COMReference binaries in source control

    The reason that MSBuild is trying to run AxImp.exe is that we have a collection of COMReference elements in our VSPackage project. Instead of registering these assemblies as COM components on the build server, let’s copy these binaries to our local project and add them as normal assembly references:

    1. Remove the following COMReferences from your project:
      • EnvDTE
      • EnvDTE80
      • EnvDTE90
      • EnvDTE100
      • Microsoft.VisualStudio.CommandBars
      • stdole
    2. Create a “binaries” folder in our VSPackage project
    3. “Add existing item…” on this binaries folder for the following assemblies:
      • %ProgramFiles%\Common Files\Microsoft Shared\MSEnv\PublicAssemblies\EnvDTE.dll
      • %ProgramFiles%\Common Files\Microsoft Shared\MSEnv\PublicAssemblies\EnvDTE80.dll
      • %ProgramFiles%\Common Files\Microsoft Shared\MSEnv\PublicAssemblies\EnvDTE90.dll
      • %ProgramFiles%\Common Files\Microsoft Shared\MSEnv\PublicAssemblies\EnvDTE100.dll
      • %ProgramFiles%\Common Files\Microsoft Shared\MSEnv\PublicAssemblies\Microsoft.VisualStudio.CommandBars.dll
      • %ProgramFiles%\Common Files\Microsoft Shared\MSEnv\PublicAssemblies\stdole.dll
    4. Select all the binary files and set the “Build Action” property to “None”.
    5. Re-add assembly references to the binaries you just added.
    6. Important: Select all the references and set the “Embed Interop Types” property to false. (You can select and change them all in one operation.)

    Let’s check in and try another build on the server

    image

    Step #3: Manually set the VsSDKInstall locations

    Let’s take a look at the actual line where we’re hitting the error in Microsoft.VsSDK.Common.targets:

    <Target Name="FindSDKInstallation" Condition="'$(VsSDKInstall)'==''">
      <FindVsSDKInstallation SDKVersion="$(VsSDKVersion)">
        …

    The reason this task needs to run is that the VsSDKInstall property (and friends) hasn’t been set yet. Let’s use the “vssdk_tools” folder we set up earlier. Edit your project file again, and add the following properties to the first <PropertyGroup> element:

    <VsSDKInstall>..\vssdk_tools</VsSDKInstall>
    <VsSDKIncludes>$(VsSDKInstall)\inc</VsSDKIncludes>
    <VsSDKToolsPath>$(VsSDKInstall)\bin</VsSDKToolsPath>

    Clearly, this won’t work until we actually have the corresponding files from the Visual Studio SDK also checked in to those directories. Let’s do that now:

    1. Copy the contents of %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Inc to vssdk_tools\inc
    2. Copy the contents of %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Tools\Bin to vssdk_tools\bin
    3. Add all these new files to source control

    Let’s check in again and see where we are now:

    image

    Step #4: Set VsSDKToolsPath as an Environment Variable

    Hmmm…this one is a bit tricky. It turns out that some of the VSSDK build tasks rely on not only the $(VsSDKToolsPath) MSBuild property, but they also rely on this being set as an environment variable. We can do that fairly easily with an inline build task which we can add to our project file:

    <UsingTask TaskName="SetVsSDKEnvironmentVariables" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
      <ParameterGroup>
        <ProjectDirectory Required="true" />
      </ParameterGroup>
      <Task>
        <Code Type="Fragment" Language="cs">
          System.Environment.SetEnvironmentVariable("VsSDKToolsPath", System.IO.Path.GetFullPath(ProjectDirectory + @"\..\vssdk_tools\bin"));
        </Code>
      </Task>
    </UsingTask>
    <Target Name="SetVsSDKEnvironmentVariables" BeforeTargets="VSCTCompile">
      <SetVsSDKEnvironmentVariables ProjectDirectory="$(MSBuildProjectDirectory)" />
    </Target>

    Let’s cross our fingers and try again:

    image

    Step #5: Use 32-bit MSBuild.exe

    By default, TFS will use the x64 version of MSBuild.exe (assuming you’re on a 64-bit server). Since the VSCT assembly is 32-bit only, it will fail to load in a 64-bit process. To use 32-bit MSBuild.exe on the server (if you’re using Team Foundation Build), simply edit the build definition and change Process => Advanced => MSBuild Platform to “X86” instead of “Auto”.

    One more try:

     

    image

     

    Step #6: Add other VSSDK Assemblies to source control

    In Step #2, we only added the COMReference binaries to source control. Now, let’s do a similar procedure with the other assemblies:

    1. Remove the following assembly references from your project:
      • Microsoft.VisualStudio.OLE.Interop
      • Microsoft.VisualStudio.Shell.10.0
      • Microsoft.VisualStudio.Shell.Immutable.10.0
      • Microsoft.VisualStudio.Shell.Interop
      • Microsoft.VisualStudio.Shell.Interop.10.0
      • Microsoft.VisualStudio.Shell.Interop.8.0
      • Microsoft.VisualStudio.Shell.Interop.9.0
      • Microsoft.VisualStudio.TextManager.Interop
    2. “Add existing item…” on the binaries folder for the following assemblies:
      • %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Assemblies\v2.0\Microsoft.VisualStudio.OLE.Interop.dll
      • %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Assemblies\v2.0\Microsoft.VisualStudio.Shell.Interop.dll
      • %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Assemblies\v2.0\Microsoft.VisualStudio.Shell.Interop.8.0.dll
      • %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Assemblies\v2.0\Microsoft.VisualStudio.Shell.Interop.9.0.dll
      • %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Assemblies\v2.0\Microsoft.VisualStudio.Shell.Interop.10.0.dll
      • %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Assemblies\v2.0\Microsoft.VisualStudio.TextManager.Interop.dll
      • %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Assemblies\v4.0\Microsoft.VisualStudio.Shell.10.0.dll
      • %ProgramFiles%\Microsoft Visual Studio 2010 SDK SP1\VisualStudioIntegration\Common\Assemblies\v4.0\Microsoft.VisualStudio.Shell.Immutable.10.0.dll
    3. Select all the binary files and set the “Build Action” property to “None”.
    4. Re-add assembly references to the binaries you just added.

    One more time…

    image

      Step #7: Add Microsoft.VisualStudio.Shell.Immutable.10.0.dll to the tools directory

      CreatePkgDef.exe is the tool used to create a pkgdef file for your VSPackage. The tool itself relies on types defined in the Microsoft.VisualStudio.Shell.Immutable.10.0 assembly. On a machine with Visual Studio 2010 installed, there isn’t a problem loading it since the assembly is installed to the GAC. However, on our build server, the assembly is not in the GAC since Visual Studio 2010 isn’t installed.

      In order to allow CreatePkgDef.exe to find the assembly, we can simply add a copy of this binary in our vssdk_tools\bin directory. Do the following:

      1. Copy Microsoft.VisualStudio.Shell.Immutable.10.0.dll from our project binaries folder to vssdk_tools\bin.
      2. Add this new file to source control and checkin

      image

      Step #8: Add the VSIXManifestSchema.xsd to allow VsixManifest validation on the build server

      This task fails because the build task can’t locate the XML schema file for VSIXManifest to do schema validation. We could just switch this task off, but since it’s a good idea to run this validation when we build, let’s do what’s necessary to enable validation. There is an MSBuild property we can set to override this location on our build server. Simply add the following property to the first <PropertyGroup>:

      <VsixSchemaPath>$(VsSDKInstall)\schemas\VSIXManifestSchema.xsd</VsixSchemaPath>

      Of course, we also need to add the schema file to this directory:

      1. Copy the VSIXManifestSchema.xsd file from “%ProgramFiles%\Microsoft Visual Studio 10.0\Xml\Schemas” to vssdk_tools\schemas.
      2. Add VSIXManifestSchema.xsd to source control

      Let’s try again and see where we are:
      image

      Step #9: Disable deployment to the Experimental Instance

      To make ‘F5’ debugging work without any effort by the user, some additional targets run by default in Microsoft.VsSDK.Common.targets. These targets ‘deploy’ your extension’s files to the Experimental Instance for debugging. Since this scenario doesn’t make sense on our build server, we should disable it.

      The Visual Studio SDK includes a project property page for configuring this property:

      image

      Note that you will probably want a separate build configuration for your build server (to set this property to false) so that developers can still easily debug their package on a client machine.

      If you prefer to configure this directly in your project file instead of using the UI, use the following property:

      <DeployExtension>False</DeployExtension>

      Let’s see how this affects our build:

      image

      Success!!

      Hooray! If I check the build output directory, we now see that we have a VSIX file that was built on the server:

      image

      by Aaron Marten at May 10, 2011 06:35 PM

      May 06, 2011

      The Voidspace Techie Blog

      Danger with django expression objects

      I've recently been dealing with a bunch of concurrency issues in our django app. Some of the views modify database rows (model objects), which is unsurprising, but we did have a few situations where concurrent modifications could cause modifications to be overwritten (silently lost). ... [467 words]

      May 06, 2011 02:58 PM

      April 29, 2011

      Aaron Marten's WebLog

      PerfWatson – Automatically report responsiveness issues in Visual Studio 2010

      We’ve just released a new extension on the Visual Studio Gallery called PerfWatson. Have you ever seen this dreaded error message?

      Microsoft Visual studio is waiting for an internal operation to complete. If you regularly encounter this delay during normal usage, please report this problem to Microsoft.

      Well, now you actually can report these problems to Microsoft…automatically. Here’s a description of the extension from the Visual Studio Gallery page:

      “We’re constantly working to improve the performance of Visual Studio and take feedback about it very seriously. Our investigations into these issues have found that there are a variety of scenarios where a long running task can cause the UI thread to hang or become unresponsive. Visual Studio PerfWatson is a low overhead telemetry system that helps us capture these instances of UI unresponsiveness and report them back to Microsoft automatically and anonymously. We then use this data to drive performance improvements that make Visual Studio faster.

      Here’s how it works: when the tool detects that the Visual Studio UI has become unresponsive, it records information about the length of the delay and the root cause, and submits a report to Microsoft. The Visual Studio team can then aggregate the data from these reports to prioritize the issues that are causing the largest or most frequent delays across our user base. By installing the PerfWatson extension, you are helping Microsoft identify and fix the performance issues that you most frequently encounter on your PC.”

I’d strongly encourage you to install PerfWatson if you’re frustrated with seemingly random UI hangs in Visual Studio. This extension won’t fix the issues, but it will help us see where the real-world responsiveness issues are so that we can improve future releases.

      by Aaron Marten at April 29, 2011 10:41 PM

      April 19, 2011

      Miguel de Icaza

      Dropbox Lack of Security

      I am a fan of Dropbox. It is a great tool, a great product, and clearly they have a passionate team over at Dropbox building the product.

      Dropbox recently announced an update to its security terms of service in which they announced that they would provide the government with your decrypted files if requested to do so.

      This is not my problem with Dropbox.

My problem is that, for as long as I have tried to find out how their system works, Dropbox has made some bold claims about how your files were encrypted and how nobody had access to them, with statements like:

      • All transmission of file data occurs over an encrypted channel (SSL).
      • All files stored on Dropbox servers are encrypted (AES-256)
      • Dropbox employees aren't able to access user files, and when troubleshooting an account they only have access to file metadata (filenames, file sizes, etc., not the file contents)

      But anyone that tried to look further came out empty handed. There really are no more details on what procedures Dropbox has in place or how they implement the crypto to prevent unauthorized access to your files. We all had to just take them at their word.

These wishy-washy statements always made me feel uneasy.

      But this announcement that they are able to decrypt the files on behalf of the government contradicts their prior public statements. They claim that Dropbox employees aren't able to access user files.

      This announcement means that Dropbox never had any mechanism to prevent employees from accessing your files, and it means that Dropbox never had the crypto smarts to ensure the privacy of your files and never had the smarts to only decrypt the files for you. It turns out, they keep their keys on their servers, and anyone with clearance at Dropbox or anyone that manages to hack into their servers would be able to get access to your files.

      If companies with a very strict set of security policies and procedures like Google have had problems with employees that abused their privileges, one has to wonder what can happen at a startup like Dropbox where the security perimeter and the policies are likely going to be orders of magnitude laxer.

Dropbox needs to come clean about what privacy they actually offer in their product. Not only from the government, but from their own employees who could be bribed, blackmailed, making some money on the side or just plain horny.

Dropbox needs to recruit a neutral third party to vouch for their security procedures and the security stack that surrounds users' files and privacy. If they cannot live up to their own marketing statements, they need to clearly specify where their service falls short and what the potential security breaches are.

      Unless Dropbox can prove that algorithmically they can protect your keys and only you can get access to your files, they need to revisit their public statements and explicitly state that Dropbox storage should be considered semi-public and not try to sell us snake oil.

      by Miguel de Icaza (miguel@gnome.org) at April 19, 2011 09:10 AM

      Miguel de Icaza

Save the Date: Monospace Conference in Boston

The dates for the Monospace conference have been announced: July 23rd to 25th, 2011. The event will take place at the Microsoft NERD Center.

      The organizers have just made a call for speakers. If you have an interesting topic to discuss, please submit a talk, we would love to hear from you.

      by Miguel de Icaza (miguel@gnome.org) at April 19, 2011 08:34 AM

      April 06, 2011

      Miguel de Icaza

      Mono Android and iPhone Updates

      Today we are happy to release Mono for Android 1.0 as well as MonoTouch 4.0.

      Both products allow you to use the C# language to write applications that run on Android and iOS devices.

Both products are based on the latest Mono 2.10 core. The Parallel Frameworks can be used to write more elegant multi-threaded code across all devices, and they automatically take advantage of the multiple cores available on devices like the iPad 2 and the Xoom. C# 4.0 is now the default language, as are the .NET 4.0 APIs.
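
To make that concrete, here is a minimal sketch of the kind of code the Parallel Frameworks enable; the work inside the loop is just a stand-in for your own per-item processing:

	using System;
	using System.Threading.Tasks;

	class ParallelExample
	{
		static void Main ()
		{
			double[] input = new double[1000];
			double[] output = new double[input.Length];

			// Parallel.For spreads the iterations across the available cores.
			Parallel.For (0, input.Length, i => {
				output[i] = Math.Sqrt (input[i] + i);   // stand-in for real per-item work
			});

			Console.WriteLine ("Processed {0} items", output.Length);
		}
	}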

      Mono for Android

Our Mono for Android debuts today after almost a year's worth of development.

      Perhaps the most important lesson that we got from MonoTouch's success was that we had to provide a completely enabled platform. What we mean by this is that we needed to provide a complete set of tools that would assist developers from creating their first Android application, to distributing the application to the market place, to guides, tutorials, API documentation and samples.

      Mono for Android can be used from either Visual Studio Professional 2010 for Windows users, or using MonoDevelop on the Mac.

Mono code runs side-by-side with the Dalvik virtual machine in the same process.

      This is necessary since code running in Dalvik provides the user interface elements for Android as well as the hosting and activation features for applications on Android.

      APIs

      The Mono for Android API is made up of the following components: Core .NET APIs, Android.* APIs, OpenGL APIs and Java bridge APIs.

      Let us start with the most interesting one: Android.* APIs. These are basically a 1:1 mapping to the native Java Android APIs but they have been C#-ified, for example, you will find C# properties instead of set/get method calls, and you will use C# events with complete lambda support (with variables being automatically captured) instead of Java inner classes. This means that while in Java you would write something like:

	// Java code
	button.setOnClickListener (new View.OnClickListener () {
		public void onClick (View v) {
			button.setText ("Times clicked: " + Integer.toString (counter));
		}
	});

	// C# code
	button.Click += delegate {
		button.Text = "Times clicked: " + counter;
	};

      In addition to the UI APIs, there are some 57 Android.* namespaces bound that provide access to various Android features like telephony, database, device, speech, testing and many other services.

      In what is becoming the standard in the Mono world, OpenGL is exposed through the brilliant OpenTK API. OpenTK is a strongly typed, Framework Design Guidelines-abiding binding of OpenGL. The benefit is that both Visual Studio and MonoDevelop can provide intellisense hints as you develop for the possible parameters, values and their meaning without having to look up the documentation every time.
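
To give a feel for what that looks like in practice, here is a minimal sketch assuming the OpenTK ES 2.0 bindings; the typed enum replaces the raw GL_COLOR_BUFFER_BIT integer constant you would pass in C:

	using OpenTK.Graphics.ES20;

	class Renderer
	{
		public void ClearScreen ()
		{
			// Typed floats and a ClearBufferMask enum instead of untyped constants,
			// so the IDE can suggest the valid values as you type.
			GL.ClearColor (0.0f, 0.0f, 0.0f, 1.0f);
			GL.Clear (ClearBufferMask.ColorBufferBit);
		}
	}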

Finally, for the sake of interoperability with the native platform, we exposed many types from the Java.* namespaces (31 so far) that you might need when interoperating with third-party libraries that require an instance of one of those Java.* types (for example, a crypto stack might want you to provide a Javax.Crypto.Cipher instance). We got you covered.
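
As a rough sketch of what that interop looks like (the IVault interface below is a hypothetical stand-in for a third-party API that expects the Java type, and the Pascal-cased Cipher.GetInstance binding of javax.crypto.Cipher.getInstance is assumed):

	using Javax.Crypto;

	// Hypothetical third-party interface that wants a Java crypto object.
	interface IVault
	{
		void ProtectPayload (Cipher cipher);
	}

	class CryptoInterop
	{
		public void Encrypt (IVault vault)
		{
			// Hand the third-party library the Javax.Crypto.Cipher instance it asked for.
			Cipher cipher = Cipher.GetInstance ("AES/CBC/PKCS5Padding");
			vault.ProtectPayload (cipher);
		}
	}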

      Core Differences

Mono for Android has a few runtime differences from MonoTouch and Windows Phone 7: Android allows JIT compilation, while iOS blocks it at the kernel level and Windows Phone 7 imposes its own restrictions.

This means that developers using Mono on Android have complete access to System.Reflection.Emit. This in turn means that generics-heavy languages like F# work on Android, as do dynamic languages powered by the Dynamic Language Runtime, like IronPython, IronRuby and IronJS.
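
As a small illustration of what having System.Reflection.Emit around means, here is a minimal sketch that builds an adder at runtime; nothing in it is specific to Mono for Android, which is exactly the point:

	using System;
	using System.Reflection.Emit;

	class EmitExample
	{
		static void Main ()
		{
			// Emit an (int, int) -> int method body at runtime.
			var add = new DynamicMethod ("Add", typeof (int), new [] { typeof (int), typeof (int) });
			var il = add.GetILGenerator ();
			il.Emit (OpCodes.Ldarg_0);
			il.Emit (OpCodes.Ldarg_1);
			il.Emit (OpCodes.Add);
			il.Emit (OpCodes.Ret);

			var adder = (Func<int, int, int>) add.CreateDelegate (typeof (Func<int, int, int>));
			Console.WriteLine (adder (2, 3));   // prints 5
		}
	}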

And of course, you can also use our own C# Compiler as a Service.

Now, although those languages can run on Mono for Android, we do not currently have templates for them. Ruby and Python support also suffers from an Android limitation: the Dalvik virtual machine needs to know in advance which classes you will customize, and since that is not really possible to determine with a dynamic language, the Iron* languages cannot subclass Android classes. They can still call into the Android APIs and subclass as many .NET classes as they want.

      Native User Interfaces

MonoTouch and MonoDroid share a common runtime and a common set of class libraries, but each provides its own user interface and device-specific APIs.

      For example, this code takes advantage of iOS's UINavigationController and animates the transition to a new state in response to a user action:

      void OnSettingsTapped ()
      {
      	var settings = new SettingsViewController ();
      	PushViewController (settings, true);
      }
      	

      This is an equivalent version for Mono for Android:

      void OnSettingsTapped ()
      {
      	var intent = new Intent ();
      	intent.SetClass (this, typeof (SettingsActivity));
      	StartActivity (intent);
      }
      	

      We chose to not follow the Java write-once-run-anywhere approach for user interfaces and instead expose every single bit of native functionality to C# developers.

      We felt that this was necessary since the iOS and Android programming models are so different. We also wanted to make sure that everything that is possible to do with the native APIs on each OS continues to be possible while using Mono.

      For instance, if you want to use CoreAnimation to drive your user interactions, you should be able to leverage every single bit of it, without being forced into a common denominator with Android where nothing similar to this is available.

      Craig Dunn, one of the authors of the MonoTouch Programming Book, has written a nice Mosetta Stone document that compares side-by-side some of the key UI differences across platforms.

He has also written the Restaurant Guide Sample, which sports a unique user interface on each of Android, iOS and Windows Phone 7.

      You can take a look at this cross platform sample from GitHub.

      Split your Presentation from your Engine

      Faced with the diversity of platforms to support, both mobile and desktop, this is a good time to design, refactor and prepare your code for this new era.

Today developers can use C# to target a variety of UIs, both mobile and desktop.

To give your code the broadest reach, you should consider splitting your backend code from your presentation code. This can be done by putting reusable code (for example, REST clients) in shared libraries and keeping shared business logic in libraries of its own.

By splitting your application's presentation code from its business logic, you not only gain the ability to create native experiences on each platform, you also get a chance to test your business logic and shared libraries more easily.
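
As a rough illustration (the class name, method and URL below are made up), the shared piece could be as simple as this, with each platform's UI layer making the same call:

	// Shared library, referenced by both the MonoTouch project and the Mono for Android project.
	using System.Net;

	public class RestaurantService
	{
		const string Url = "http://example.com/restaurants.xml";   // hypothetical endpoint

		public string FetchRestaurantsXml ()
		{
			using (var client = new WebClient ())
				return client.DownloadString (Url);
		}
	}

	// iOS:     new RestaurantService ().FetchRestaurantsXml () from a UIViewController.
	// Android: the same call from an Activity; only the presentation code differs.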

      Linking

In Mono for Android, when you build an application for distribution, we embed the Mono runtime in your application. This is necessary so that your application is entirely self-contained and does not take on any external dependencies.

      Mono for Android uses the Mono Linker to ensure that only the bits of Mono that you actually use end up in your package and that you do not pay a high tax for just using a handful of functions.

For example, if you just want to use a method from XElement, you would only pay the price for using this class and any of its dependencies. You would not end up bringing in the entire System.Xml stack: you only pay for what you use.
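
For instance, a program like the following sketch only touches XElement, so after linking the package carries just the parts of System.Xml.Linq that this code actually reaches:

	using System;
	using System.Xml.Linq;

	class LinkerExample
	{
		static void Main ()
		{
			// The only System.Xml.Linq type used here is XElement; the linker can
			// drop the rest of the XML stack from the distributed package.
			var order = XElement.Parse ("<order id='42'><item>coffee</item></order>");
			Console.WriteLine (order.Attribute ("id").Value);
		}
	}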

      During development a different approach is used: the Mono runtime is installed on your emulator or test device as a shared runtime. This minimizes both the build and deploy times.

      Mono for Android References

Start with our documentation portal; there you will find our Installation Guide, a tutorial for your first C# Android application, our tutorials (many ported from their Java equivalents), our How-To Guides and a large collection of sample programs.

You can also explore the documentation for the Mono for Android API at a conveniently memorable URL: docs.mono-android.net.

The first book on Mono for Android will be available on July 12th. In the meantime, we have created many tutorials and guides that will help you get started.

I also strongly suggest that those interested in parallel programming check out Patterns for Parallel Programming: Understanding and Applying Parallel Patterns with the .NET Framework 4. It is a free PDF and a must-read for anyone building multi-core applications.

      Thank You!

      Mono for Android would not have been possible without the hard work of the MonoDroid team at Novell. The team worked around the clock for almost a year creating this amazing product.

The team was backed by the Mono core team, which helped us ship C# 4.0, WCF, the linker and the LLVM support; improve the VM; extend the MonoDevelop IDE; scale Mono; improve our threadpool; support OpenTK; implement the Parallel Frameworks; and ship dozens of betas for MonoDevelop, Mono and Mono for Android.

      by Miguel de Icaza (miguel@gnome.org) at April 06, 2011 11:40 PM

      April 02, 2011

      Dave Fugate (Testing IronPython)

      How I Lost 25 Pounds in a Month Without Exercising

      How Things Got So Bad
Since joining Microsoft back in mid-2006, my weight skyrocketed about 15% or 30 pounds. A large part of this can be attributed to the abundance of unhealthy, free food at Microsoft, but that’s only half of the story. When I worked at the University of Calgary, I had to walk at least a mile and a half each day to get to and from various transit points. Now, the parking lot is at best 50 meters away from my building. While I might have complained about walking around in -20 degree weather in Canada, I really had no idea how good it was for my health. Up until recently, I didn’t know just how badly Microsoft’s free perks, such as unlimited soda and the constant supply of junk food outside co-workers' offices, were harming me either. That said, my health is my own responsibility and I shouldn't have listened to the demons in my head that kept telling me to eat more.

      The Scare
Flash forward to January 2011. An annual check-up revealed that my “fatty-liver” condition (the human equivalent of foie gras), diagnosed in 2008, had progressed such that I now have either gallstones or possibly even a growth in my gallbladder. I was only 30 at the time! Anyway, this was exactly the ‘scare’ I needed for a major lifestyle change. My amazing wife letting me know that my snoring had gotten far worse since moving to Seattle wasn’t enough. Fear of cancer and Microsoft’s announcement that our 100% healthcare would disappear in two short years spurred me into action.

      Douglas Crockford
      By chance, I came across a wonderful blog post, http://www.crockford.com/pwl/, by Douglas Crockford which explained our society’s current obesity epidemic and gave some awesome advice on losing weight. If you haven’t read this post before, I highly recommend it as it’s quite logical and well thought-out. 
       
      Dr. Sandra Cabot  
Concurrently, I was also trying to abide by the advice Dr. Sandra Cabot gives in her book, the Liver Cleanse Diet. The basic premise of the book is that the liver is solely responsible for removing fat from the bloodstream, and an unhealthy liver implies you’ll pack on the pounds. Well, the way my “fatty-liver” was diagnosed was via blood tests looking for chemicals the liver releases when it’s under duress.
       
      The Lifestyle Changes
      What exactly did I do to lose the 25 pounds you ask? It was a combination of the Liver Cleanse Diet, Doug’s advice, and strong support from my wife:
      • No more sausage biscuits for breakfast. Instead, my wife or I typically juice apples/carrots/celery/kiwi/etc. or eat a bowl of oatmeal followed by a cup of black coffee
      • Replaced Indian food, pizza, and burgers for lunch with either a low-fat salad or a turkey (Subway) sub. The former set is incredibly high in fat
      • Take double the daily recommended amount of Milk Thistle, a herb purported to protect the liver
      • I used to eat 90% meat/cheese/dairy/flour for every meal, and perhaps 10% were fruits and vegetables. Now more than 50% of my food intake comes from fruits and vegetables
      • Portion control, portion control, portion control. My wife’s culture “loves you with food” which needed to change to “love you with less food”
      • Limited my intake of fats to those found in avocadoes, nuts, and lean poultry. It was hard to give up cheese and red meat, but it has paid off
      • Severely restricted my intake of alcohol. While it doesn’t necessarily add fat to my bloodstream, it does hurt one’s liver. Over the course of three months I’ve had a grand total of four beers
      • Severely restricted my intake of refined sugar. Had only one coke in three months and all the oatmeal cookies I’ve eaten have been low-sugar and low-fat. Have had a few peanut butter and honey sandwiches though
      • For two weeks I took a commercial (As Seen on TV) product called the “The Cleaner”. Basically just pop some pills every day and have weird looking bowel movements
      • This is perhaps the most difficult, yet also the most important facet of my diet – do a gallbladder cleanse. After finishing “The Cleaner”, one evening I downed 1.25 cups of the finest cold-pressed olive oil I could find chased by 1 cup of fresh lemon juice; all over the course of three hours. If you want to do this, be forewarned the next day will not be fun by any means. I didn’t *really* start shedding weight like crazy until after the gallbladder cleanse
      • I even jumped off the wagon for four days while on a business trip last month and have lost five pounds since then!
       
      The Benefits
      Now onto the benefits I’ve seen thus far:
      • the look of shock from people who haven’t seen me in a while
      • a recent blood test indicates my liver function is back to normal
      • a sleep study performed after I’d lost about 15 pounds showed I was no longer snoring excessively nor breathing incorrectly
      • my brain is operating at a frequency I quite honestly haven’t experienced since 2004
      • far less tired yet I’ve also been getting less sleep (i.e., maybe six hours a night)
      • lost 31 pounds to date. After another five I plan on relaxing the diet just a bit

      April 02, 2011 08:02 PM

      March 31, 2011

      Miguel de Icaza

      Mono and Google Summer of Code

We have been lucky enough to have Google accept Mono as a mentoring organization for the Google Summer of Code 2011.

      This is a great opportunity for students to get involved with open source, contribute, learn and get paid for their work during the summer.

We have a lot of ideas to choose from on our student projects page, ranging from virtual machine hacking and MacOS X improvements to MonoDevelop extensions, language bindings and even improvements to the Manos web application framework.

Do not let our limited imagination stop you. Although there are plenty of ideas to choose from, students should feel free to come up with their own. In past years, projects based on students' own ideas have been very successful, and we want to encourage more of those.

      Proposal submission is open until Friday April 8, so now is the time to join our wonderful community, discuss your project ideas and start working on those proposals.

      The Mono Summer of Code IRC channel is #monosoc on irc.gnome.org

      by Miguel de Icaza (miguel@gnome.org) at March 31, 2011 01:23 AM

      March 30, 2011

      Miguel de Icaza

      Monospace Conference: Boston, July 2011

      The Mono community is organizing the Monospace conference to be held in July in Boston. This event is being organized by Dale Ragan, Louis Salin and Paul Bowden.

      The organizers have just made a call for speakers.

      If you have an interesting technology that you would like to talk about during this 3-day event, you should submit a talk.

      Monospace is on a very aggressive schedule. The good news is that the entire Mono team will be participating in the event.

      Once the dates are set in stone, we will open registration. Currently we are thinking of hosting an event for some 200 attendees.

      by Miguel de Icaza (miguel@gnome.org) at March 30, 2011 01:05 AM