Python Programming, news on the Voidspace Python Projects and all things techie.

matplotlib and numpy for Python 2.7 on Mac OS X Lion

Unfortunately, due to an API change, the latest released version of matplotlib is incompatible with libpng 1.5. Take a wild guess as to which version comes with Mac OS X Lion. :-/

Fortunately this is fixed in the matplotlib repository. Here's how I got matplotlib working on Mac OS X Lion (with Python 2.7 - but these instructions should work fine for other versions of Python too).

First, matplotlib requires numpy. The latest version is 1.6.1, available from here. The precompiled Mac OS X binaries are built to be compatible with Mac OS X 10.3 and up, which means they are 32 bit only. By default Python runs as 64 bit on OS X Lion, so you'll see this when attempting to import numpy:

>>> import numpy
Traceback (most recent call last):
 ...
ImportError: dlopen(/.../site-packages/numpy/core/multiarray.so, 2): no suitable image found.  Did find:
        /.../site-packages/numpy/core/multiarray.so: no matching architecture in universal wrapper

You can get round this by launching python as a 32 bit process. I have the following alias in my .bash_profile:

alias py32="arch -i386 python"
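
If you want to confirm which architecture a particular interpreter is running as, a quick check from inside Python (my own convenience, not part of the original instructions) is:

import platform
import sys

# reports '64bit' or '32bit' depending on how the interpreter was launched
print platform.architecture()[0]
# equivalently: True for a 64 bit process, False for a 32 bit one
print sys.maxsize > 2**32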

The next problem is matplotlib itself. This blog entry shows how to build matplotlib from the git repo using homebrew. I don't want to use a homebrew-installed Python, so I modified the instructions to install only the dependencies with homebrew. I also set the correct flags to compile a 32 bit version of matplotlib to match the 32 bit numpy.

brew install pkg-config
brew install gfortran
cd matplotlib
export ARCHFLAGS="-arch i386"
py32 setup.py build
py32 setup.py install
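
Before trying matplotlib itself, a quick sanity check (mine, not part of the original instructions) confirms that the 32 bit interpreter can import both packages:

# run with the py32 alias above so the interpreter is a 32 bit process
import platform
import numpy
import matplotlib

print platform.architecture()[0]                 # should print '32bit'
print numpy.__version__, matplotlib.__version__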

And it appears to work. So far anyway:

>>> import pylab
>>> x = pylab.randn(10000)
>>> pylab.hist(x, 100)
>>> pylab.show()

Posted by Fuzzyman on 2011-09-05 00:18:13


Discover 0.3.2 and the load_tests protocol

discover is a test discovery module for the standard library unittest test framework. Test discovery is built into unittest in Python 2.7 and 3.2. The discover module is a back-port of test discovery to work with Python 2.4 - 2.6 and Python 3.0 / 3.1 [1].

0.3.2 is a minor bugfix release. Test discovery includes a new protocol called load_tests. In previous versions the standard tests would be passed to load_tests as a list instead of a TestSuite instance; this bug is now fixed both in unittest and in discover.

discover can be installed with pip or easy_install. After installing, switch to the top level directory of your project and run either of:

python -m discover
python discover.py

This will discover all tests (with certain restrictions) from the current directory. The discover module has several options to control its behavior (full usage options are displayed with python -m discover -h). See the documentation on the PyPI homepage for details.

The load_tests protocol is interesting. It allows you to customize how tests are loaded from a module by defining a load_tests function. load_tests takes three arguments: the test loader, the standard set of tests for that module (allowing you to just add tests to the standard set if you want), and a pattern. The pattern argument is only used by load_tests functions in the __init__.py of test packages.

Here's an example of a test module with two test classes. The load_tests function returns a test suite that only uses one of the test classes:

import unittest


class FirstTest(unittest.TestCase):
    def testFoo(self):
        self.fail()

class SecondTest(unittest.TestCase):
    def testFoo(self):
        pass

def load_tests(loader, tests, _):
    return loader.loadTestsFromTestCase(SecondTest)

When the test module is loaded by discover, or unittest from Python 2.7 / 3.2, then load_tests will be called to create the test suite for the module.
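
For completeness, here is a sketch (mine, not taken from the discover documentation) of a load_tests function in a test package's __init__.py, where the pattern argument does get used - it lets the package control how its own directory is discovered:

import os

def load_tests(loader, standard_tests, pattern):
    # re-run discovery in this package's directory using the pattern passed
    # in, and add the result to the standard set of tests for the package
    this_dir = os.path.dirname(__file__)
    package_tests = loader.discover(start_dir=this_dir, pattern=pattern)
    standard_tests.addTests(package_tests)
    return standard_tests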

[1]discover.py is only about 300 lines of Python. Supporting 2.x and 3.x in a single code-base is easy with small modules but I wouldn't recommend it for larger projects.

Posted by Fuzzyman on 2010-02-07 23:25:47


Django: Tear down and re-sync the database

Django includes the useful management command syncdb for creating the database tables and columns used by your application. If you add new tables (model classes) then re-running syncdb will add them for you. Unfortunately if you modify columns of existing tables, or add new columns, then syncdb isn't man enough for the job.

For modifying the schema of production systems, migrations are the way to go. I played a bit with South for Django, which is pretty straightforward. For a system still in development, and changing rapidly, migrations are overkill. We have a script for populating the database with test data, which we update as the schema evolves. (In parallel with this we have a script that imports the original data from the legacy application we are replacing - again updated as our app becomes capable of handling more of the original schema.)

For development what we really want is to tear down our development database and re-run syncdb. Running syncdb prompts for manual input to create a superuser, so we want to disable that so the whole process can be automated. I found various recipes online to do this, but most of them use an obsolete technique to disable superuser creation.

In the end I used a combination of this recipe to programmatically clear the databases (using the sql generated by sqlclear) and this recipe to disable superuser creation.

Note

The code also skips clearing the authentication table as we are using Django authentication unmodified. Comment out the line that does this if you aren't using Django authentication or want to clear it anyway.

#!/usr/bin/env python

import os
import sys
import StringIO

import settings
from django.core.management import setup_environ, call_command
setup_environ(settings)

from django.db import connection
from django.db.models import get_apps, signals


app_labels = [app.__name__.split('.')[-2] for app in get_apps()]
# Skip clearing the users table
app_labels.remove('auth')

# Capture the sql that sqlclear generates for every app
sys.stdout = buffer = StringIO.StringIO()
call_command('sqlclear', *app_labels)
sys.stdout = sys.__stdout__

# Drop the surrounding BEGIN / COMMIT statements, keeping the DROP TABLE queries
queries = buffer.getvalue().split(';')[1:-2]

cursor = connection.cursor()
for query in queries:
    cursor.execute(query.strip())

from django.contrib.auth.management import create_superuser
from django.contrib.auth import models as auth_app

# Prevent the interactive question about creating a superuser.
signals.post_syncdb.disconnect(
    create_superuser,
    sender=auth_app,
    dispatch_uid="django.contrib.auth.management.create_superuser")
call_command('syncdb')

It wasn't all plain sailing. We're using MySQL (God help us) and our development machines are all running Mac OS X. On Mac OS X MySQL identifiers, including table names, are case insensitive. Whilst I would object strongly to a case insensitive programming language, this actually makes working at the sql console slightly less annoying, so it isn't a problem in itself.

We define our data model using standard Django model classes:

from django.db import models

class NewTableName(models.Model):
    NewColumnName = models.CharField(max_length=255, db_column="OriginalSpaltennamen")

    class Meta:
        db_table = 'UrsprunglichenTabellennamen'

The Meta.db_table specifies the table name that will actually be used in the database. We use the original table and column names where possible, as the end users will have to modify some existing tools to work with the new system and this minimizes the changes. As you can see, both the original and the new table names are mixed case.

For some reason I never got to the bottom of, where the model classes have foreign key relationships syncdb creates those tables with all lowercase names. The culprit could be Django, MySQL or the Python connector to MySQL (or any combination of these).

Unfortunately sqlclear will only generate sql to drop tables where the casing specified in the model exactly matches the casing in the database. I worked round it by changing all our Meta.db_table entries to be all lowercase. Not what you would call ideal but acceptable.
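
So the model above ends up looking something like this (an illustrative sketch rather than our real code):

from django.db import models

class NewTableName(models.Model):
    NewColumnName = models.CharField(max_length=255, db_column="OriginalSpaltennamen")

    class Meta:
        # all lowercase, so the casing matches what actually ends up in the
        # database and sqlclear can still generate the DROP statements
        db_table = 'ursprunglichentabellennamen'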

Now every time we update our database schema we can simply run this script. It drops all existing tables and then re-creates them with all the changes.

Note

Carl Meyer suggests using call_command('syncdb', interactive=False) instead of the signals.post_syncdb.disconnect code. It's certainly shorter but I haven't tried it yet.

In the comments Stavros Korokithakis points out that the reset admin command will reset individual apps and regenerate them. If you have several apps in a project this script is still simpler, but if you only need to reset one then you might as well just use ./manage.py reset <appname>. It takes the --noinput switch if you want to suppress the user prompts.

Posted by Fuzzyman on 2009-12-27 00:06:13


IronPython Tools and IDEs

A frequent question on the IronPython mailing list is "what IDE should I use with IronPython?". For many .NET developers the question is phrased slightly differently, "how do I use IronPython in Visual Studio?". There are now several different major IDEs with IronPython support, including a few different ways of using IronPython with Visual Studio.

I've written a roundup of the major editors and how they support IronPython. This includes a look at the standard features you would expect in a Python editor, like autocomplete, calltips, debugging and more - with honourable mentions for other Python editors like Vim, Emacs, Komodo, Davy's IronPython Editor and the DLR editor that comes with the Pyjamas project. The article also covers the standard tools for Python development: the code quality checkers (PyLint, PyChecker and PyFlakes), profilers, debuggers, coverage, refactoring and so on.

Article contents:

  • Introduction

  • IronPython Studio
    • Debugging

    • Visual Studio
      • Visual Studio SDK Experimental Hive
    • Summary

  • SharpDevelop
    • Debugging
    • Summary
  • Wing IDE
    • Summary
  • Eclipse and PyDev
    • Summary
  • Other Editors

  • Other tools
    • Windbg SOS
    • Code Quality Checkers
    • Debugging and the Python Pdb Module
    • Code Coverage and Profiling
    • Refactoring and Complexity

Posted by Fuzzyman on 2009-08-31 22:09:19


Movable IDLE for Python 2.5 on Windows

Movable IDLE is a standalone version of the IDLE Python IDE. Movable IDLE is part of the Movable Python project and can be run (Windows only I'm afraid) from a USB memory stick and without installing Python. It comes with the full Python standard library.

Movable IDLE for Python 2.5

I've built a new version. The only differences from the previous release are that it is now built with Python 2.5 and no longer displays an annoying dialog on load.

It's been a while since I built a distribution using the Movable Python codebase. It seems fine, but feel free to report any problems you encounter or bugs you find.

Posted by Fuzzyman on 2009-06-24 21:38:27


HOWTO: Using the Wing Python IDE with IronPython

A common question amongst new IronPython users is "Which IDE is best for IronPython?"

One common suggestion, which .NET developers gravitate towards naturally, is IronPython Studio. This is an example of extending Visual Studio through the VSx shell, and in my opinion not really suitable for pain-free use. You can read some of my thoughts on it in this IronPython-URLs blog entry.

As IronPython code is just Python code, any good Python IDE will do; however, they rarely feature good integration with IronPython. Fortunately there is a wide range of tools that can be plugged into any extensible editor.

My favourite IDE is the Wing IDE, not least because it has the best autocomplete (intellisense) of any Python IDE I've used. It achieves this by statically analysing Python code and inferring the types. This doesn't work with .NET types because they don't have Python source code... This HOWTO shows 'how to' enable autocomplete for the .NET types in Wing, plus how to use the scripting API to add commands like executing the current file with IronPython.

Posted by Fuzzyman on 2009-05-17 17:13:56


More Changes to Mock: Mocking Magic Methods

I recently released Mock 0.4.0, my test support library for mocking objects and monkeypatching. It's been gratifying and surprising to get so many emails from people using it.

I originally wrote Mock to simplify some of the testing patterns we use at Resolver Systems, and we use a forked version [1] there. Mock has greatly improved the readability of our tests whilst reducing the number of lines of code needed. Some improvements made at Resolver Systems (like assert_called_with and the single argument form of patch) have fed back into the public version.

The obvious gap in Mock as it stands at 0.4.0 is its inability to mock magic methods. This means that you can't use it to mock out any of the built-in container classes, or classes that implement protocol methods like __getitem__ and __setitem__. The main reason it didn't support them was the lack of a clear and obvious way to do it. At Resolver Systems we've bolted on support for a few of the magic methods as we've needed them; unfortunately in an ad-hoc and inconsistent manner.

My requirements for protocol support in Mock (which I think I've now met) were:

  • A clean, simple and consistent API
  • Being able to meet most use cases without having to meet all of them
  • Not unconditionally adding a large number of new attributes to mock objects [2]
  • Mocks shouldn't support all the protocol methods by default as this can break duck-typing
  • It mustn't place any additional burden on developers not using them

The solution I've now come up with is implemented in the SVN repository, and will become Mock 0.5.0. So far it is only capable of mocking containers, and I'd like feedback as to whether this is going to meet people's use cases. If you are interested, please try the latest version and let me know what you think.

Documentation for the new features is not done, but it is all tested so you can check 'mocktest.py' and 'patchtest.py' for examples of how they work.

The implementation uses a new class factory called MakeMock. This takes a list of strings specifying the magic methods (without the double underscores for brevity!) you want your mock objects to have - it returns a subclass of Mock that only has the magic methods you asked for.

For the container methods, the Mock class takes a keyword argument items that can either be a dictionary or a sequence. This is stored as the _items attribute (that only exists on mocks with container methods), defaulting to an empty dictionary, and can be any mapping or sequence object. The container methods delegate to this, and all calls are recorded normally in method_calls.

>>> from mock import MakeMock
>>> MagicMock = MakeMock('getitem setitem'.split())
>>> mock = MagicMock(items={'a': 1, 'b': 2})
>>> mock['a']
1
>>> mock.method_calls
[('__getitem__', ('a',), {})]
>>> mock['c'] = 10
>>> mock._items
{'a': 1, 'c': 10, 'b': 2}

There is an additional bit of magic. When you instantiate a mock object normally (using Mock(...)), you can use the 'magics' or 'spec' keyword arguments to actually get back an instance with magic methods support. The spec keyword argument takes a class that you want your Mock to imitate, and accessing methods not on the actual class will raise an AttributeError. When you instantiate a mock object with a spec keyword argument, the constructor will check if the spec class has any supported magic methods; if it does you will actually get an instance of a mock that has these magic methods. The magics keyword argument is new, and lets you specify which magic methods you want:

>>> from mock import Mock
>>> mock = Mock(magics='getitem contains')
>>> 'hi' in mock
False
>>> mock['hi'] = 'world'
Traceback (most recent call last):
  ...
TypeError: 'MagicMock' object does not support item assignment
>>> mock._items['hi'] = 'world'
>>> 'hi' in mock
True
>>> type(mock)
<class 'mock.MagicMock'>
>>> mock.method_calls
[('__contains__', ('hi',), {}), ('__contains__', ('hi',), {})]

Note that the magics keyword takes a string and does the split for you.

As I implement more magic methods I'll provide shortcuts for obtaining instances / classes that have all the container methods, or all the numeric methods, or just everything.

The patch decorator also now takes spec and magics keyword arguments, but that's not as useful as it sounds. You will usually be using patch to mock out a class, so you'll still need to set the return value to be a mock instance with the methods you want.
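
Here's a sketch of that pattern, using the API described in this post (the patch target and the test itself are just illustrative):

from mock import patch, MakeMock

MagicMock = MakeMock('getitem setitem'.split())

@patch('somemodule.SomeClass')      # hypothetical class being mocked out
def test_something(MockSomeClass):
    # patch replaces the class itself; instances come from its return value,
    # so that is what needs the magic methods
    MockSomeClass.return_value = MagicMock(items={'a': 1})
    instance = MockSomeClass()
    assert instance['a'] == 1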

For comparison methods I'll allow you to provide a 'value' object that all comparisons delegate to. This can also be used for hashing, in-place, right hand side and unary operations. That doesn't leave much left to cover (descriptors and the context management protocol methods), and I'm not sure how much demand there will be for mocking those.

Protocol methods supported currently are:

  • __getitem__
  • __setitem__
  • __delitem__
  • __iter__
  • __len__
  • __contains__
  • __nonzero__

The list of changes that are currently in Mock 0.5.0:

  • Mock has a new 'magics' keyword argument - a list (or whitespace-separated string) of magic methods that the Mock instance should provide (only container methods available so far).

  • Mock has an 'items' keyword argument for mocks implementing container methods.

  • The methods keyword argument to Mock has been removed and merged with spec. The spec argument can now be a list of methods or an object to take the spec from.

  • patch and patch_object now take magics and spec keyword arguments

  • TODO: verify - Nested patches may now be applied in a different order (created mocks passed in the opposite order). This is actually a bugfix.

  • MakeMock is a Mock class factory. It takes a list of magic methods and creates a MagicMock class (Mock subclass) with the magic methods you specified. Currently only container methods are available.

  • Instantiating a Mock with magics / spec will actually create a MagicMock with magic methods from the magics / spec list.

There are still a few things left to do (apart from implementing support for all the other magic methods) and some open questions:

  • Should MakeMock accept a string argument (and do the split for you)?
  • How should reset affect '_items'? It should probably take a copy of the items at instantiation and restore it when reset.
  • Should a failed indexing attempt still be added to 'method_calls'? (Currently not - it just raises the usual exception.)
  • Should attributes (children) on mocks created from MakeMock be plain 'Mock' or from the same class as their parent? (currently they are plain mocks)
  • Should magic methods called on a child be recorded in the parent's method_calls? (not currently possible, as all children will be plain mocks rather than magic mocks)

The first two of these I will probably decide one way or the other before a 0.5.0 release. The others I may just leave for a while and see how it gets used.

[1]Forked because we use a kind of half-breed .NET API naming convention, and also to maintain API compatibility with the Listener class that it replaces - and that is still in use in our older tests.
[2]Every attribute that Mock uses internally is an attribute that it is no longer capable of mocking for the programmer. If Mock uses a '.value' attribute then it is no good for mocking a class that has a '.value' attribute as they will conflict.

Posted by Fuzzyman on 2008-11-05 15:13:20


Doctest, How I Loathe Thee

Andrew Bennets has written a series of blog entries on why doctest makes for poor unit tests. I agree with a lot of what he has to say, and this was brought home to me again when working on ConfigObj, which is tested with doctest.

Note

doctest is a Python testing tool. It executes interactive code sessions (typically cut and pasted from an interactive interpreter) embedded in documentation or docstrings. It produces failure messages if the actual output from executing the embedded code differs from what is specified in the source.

It is a great tool for checking that examples in documentation / docstrings still work, but in my opinion it makes a poor unit testing tool. Thankfully unittest is great at this.

Andrew covered some of my pet peeves. In particular, because doctest compares the output of your code against text, you can't print arbitrarily ordered data like dictionaries. Instead you have to compare your data to known good data - and the result of that comparison is either True or False. If it is False then that's all you get; you don't get to see what your data actually was - not helpful as a diagnostic tool.
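
A minimal example of the pattern (mine, not from Andrew's posts):

def make_config():
    """
    Printing the dict directly would make the test depend on repr ordering,
    so the doctest compares against known good data instead:

    >>> make_config() == {'host': 'localhost', 'port': 8080}
    True
    """
    return {'host': 'localhost', 'port': 8080}

if __name__ == '__main__':
    import doctest
    doctest.testmod()

When that comparison fails, all doctest can tell you is that it expected True and got False - nothing about what the dictionary actually contained.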

Even worse, try adding prints into relevant parts of your code for diagnostics. The prints mean extraneous output, so all your tests start failing... It's not as if you can copy your test case into a separate file either; every line starts with '>>>' or '...'.

Another nit, which could probably be fixed, is that when a line in a test fails, execution continues. Normally this means a cascade of failures, and you have to dig through the output to find the real one.

Posted by Fuzzyman on 2008-10-28 12:39:16


Diagramming on the Mac

One of the annoying things about writing a book is having to create my own diagrams. This was something I wasn't expecting when I started the project; I'm a good writer but awful at producing diagrams.

Thankfully a colleague, Jonathan Hartley, stepped up and helped me.

Here's one of my original diagrams, a 'hedgehog diagram' I produced with 'Paint' (I was still running Windows at home at the time - later I upgraded to Paint.NET which is a much better program but didn't improve my skills):

A hedgehog diagram of basic function syntax in Python

Here is Jonathan's rendering of the same diagram:

A better diagram of basic function syntax in Python

To produce them, he used Open Office Draw. I'm now working on chapter 15 (Embedding IronPython in C# and VB.NET using the DLR Hosting API), and thought I'd give it a try myself.

I used NeoOffice, which is a Mac port of Open Office, and it looks very good. I did try Inkscape, even upgrading my X11 install to the latest version of XQuartz, but it just refuses to run.

Here's my first attempt:

The DLR hosting API for embedding the IronPython Engine

It's certainly better than my earlier attempts, but I think it still needs some magic from Jonathan.

Several people also recommended OmniGraffle, which looks good, but is not cheap and isn't cross-platform. Given my skill level I think OO offers me everything I need.

Whilst we're on the subject of Mac software, I've also been using a few new programs recently.

  • Pixelmator

    Having created the diagram in Neo Office, I used Pixelmator to edit the Tiff graphics file. I think I got Pixelmator included with one of the recent MacHeist bundles. It seems like a very capable program for basic image editing.

  • MPlayer

    Yet another Open Source video player. I've been trying to play some high quality mkv (Matroska) files encoded with H.264. Neither Quicktime nor VLC (usually excellent) could play them. MPlayer isn't as polished as VLC, but plays them fine.

  • Xee

    Nice little program for image viewing. Much nicer than Preview (which is part of Mac OS X and great for PDFs).

  • Cornerstone

    A shiny commercial Subversion front end. I'm trying out the demo version. It seems great so far. I also tried Versions (also in Beta), but it doesn't let you work with existing working directories (you have to checkout through the Versions UI) - so I didn't get very far.

  • Transmit

    Nice FTP, SFTP (etc) client for the Mac. Again, commercial but worth it. I couldn't find another client that had a '2-pane' UI, except for FileZilla, which just refuses to work on my computer. It dies with an odd error [1] that few other people seem to have, and the fixes suggested for them don't work for me.

  • Octave Engine Casual

    A new and very funky physics engine from a Japanese developer. Absolutely pointless, but very fun - and very slick on the Mac (and Windows).

  • Chmox

    A CHM reader. The CHM (Compiled HTML Help) format for documentation is popular on Windows, and with reason, as if well done it can make for very usable docs. Chmox hasn't been updated for a while, but seems to work fine.

[1]fzsftp could not be started. fzsftp is in the Filezilla package and I tried setting the suggested environment variable. Oh well.

Posted by Fuzzyman on 2008-07-06 17:25:21


Remote Pairing with Copilot and Skype

Today I was due in at work, but engineering works between Northampton and London meant that there were no trains (which I didn't discover until I got to the station). I would have quite happily stayed at home and spiked, but Glenn was actually in the office and we couldn't think of enough for two of us to spike. At Resolver we practise Pair Programming, and all production code has to be paired on, so we decided to experiment with remote pairing.

We used Skype for the audio, which was straightforward.

We considered using a collaborative editor, like Gobby. We decided in the end to try screen sharing as it would be more flexible. Additionally, I didn't have a full build environment and copy of the subversion tree we were working on (and a collaborative editor would probably require me to have the files being edited).

I discovered this Jon Udell blog entry on screen sharing tools, but in the end we decided to give Joel Spolsky's Copilot a try.

It is based on VNC and is really easy to use. A day pass costs $5 [1] (I paid with Paypal) and then both 'sides' download a 736kb client and run it. That's it: no messing around with IP addresses or configuring routers and firewalls; just run the client and you are sharing a screen (the session id is encoded into the client you download - very clever).

Both Resolver and I have a pretty good internet connection, and the screen sharing was very good. There was a bit of a lag, but less than with the VNC clients I've used - even when the connection is only across an intranet [2].

It was a great experience. The audio connection was seamless and it really felt like 'pairing'. We worked together on the problem, and were both able to 'drive' (control the keyboard). We even completed the user story!

As there is a Mac client (no Linux client I'm afraid) I could pair program at Resolver without having to use Windows! Obviously a non-proprietary solution would be even better than Copilot, but I can't see one being as easy to use.

I really enjoy working at Resolver, but I can't see myself doing the two hour commute (each way) indefinitely. If I could work from home this would be my dream job.

On a totally different note, I've received my Moo Cards (and they're great) and an Eee PC with 8GB flash drive and 1GB memory [3]. I've also been watching The Muppet Show and The Cosby Show, both of which were childhood favourites and still hilarious - as you would know if you were following me on Twitter.

[1]They also have a subscription model but the rates aren't very good compared to a 24 hour pass - even just for eight or nine hours straight use.
[2]I later tried it with Andrzej with him using the Mac client. As his home network connection isn't so good the lag was more noticeable but still acceptable.
[3]I know. I cancelled a previous order, but My Boss got one and it looks really nice. The lure of the bigger drive and memory was too much for me.

Posted by Fuzzyman on 2007-12-27 22:55:07


IronPython Studio: Problems, Victory and Compiled Binaries

After much pain I have finally got IronPython Studio working; fortunately it was worth the effort.

IronPython Studio is an IDE based on the Visual Studio Shell. Visual Studio 2008 allows you to create (and freely distribute) applications built on top of the Visual Studio shell, and IronPython Studio is an 'example' of this - so the source is available to play with if you have the full version of Visual Studio 2008. Whilst I haven't abandoned what I consider to be more featureful Python editors like Wingware's Wing IDE, IronPython Studio does have a few tricks up its sleeve that make it useful for IronPython development.

The WPF Designer in IronPython Studio

The first trick, which I didn't appreciate until I got IronPython Studio working properly, is that it produces compiled binaries from IronPython projects. It comes with both WPF and Windows Forms designers which generate Python code. You can either use that code directly [1] or compile the project into an executable.

Last but not least, IronPython Studio has debugger support - including setting breakpoints in IronPython code. As conventional (!?) Python debuggers don't work with IronPython this could come in useful (despite my general antipathy towards debuggers there are times when they are invaluable).

So why did I have such problems getting IronPython Studio working? Partly poor instructions and partly down to the fact that I have had several versions of Visual Studio installed on this XP box (ranging from Visual Studio 2003, through several versions of 2005 and 2005 express up to 2008 betas and 2008 express).

Just in case anyone else has difficulties, here are the problems I had and how I overcame them. (Thanks to the guys - especially corysama - over on the IronPythonStudio forum).

First of all the installation instructions aren't massively clear. The steps you need are:

  1. Download and install the Visual Studio X redistributable from: Visual Studio Extensibility
  2. After you run the Install for the MS VS 2008 Shell Isolated Mode Redistributable, you must then go to the folder ("C:\VS 2008 Shell Redist\Isolated Mode") and click on "vsshellisolated_enu.exe" to actually install the redistributable runtime.
  3. Install IronPython Studio

After doing this I found that attempting to build a project gave me an "Object reference not set to an instance of an object" error (a particularly frustrating and hard to track down exception whenever we have encountered it in Resolver).

The readme that comes with IronPython Studio (which of course I didn't read) actually explains this. You need to register the IronPython CodeDom provider for the compilation to work (as explained by Aaron Marten on the VSX team).

Unfortunately for Windows XP these instructions are just plain wrong (being charitable, I assume they are for Vista and they have just forgotten that most people still want to run XP!). The instructions tell you to insert the new XML "under the root <configuration> node" (in machine.config). Instead you should place it between the <system.web> and <system.serviceModel> nodes. The config XML will look like this:

    </providers>
  </roleManager>
</system.web>
<system.codedom>
  <compilers>
    <compiler language="py;IronPython" ...>
  </compilers>
</system.codedom>
<system.serviceModel>
  <extensions>

By now I had completely screwed my system and even attempting to use the WPF designer gave me the following error:

The Microsoft.VisualStudio.Internal.WPFFlavor.WPFPackage, Microsoft.VisualStudio.WPFFlavor,
Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
({B3BAE735-386C-4030-8329-EF48EEDA4036}) did not load because of previous errors.
For assistance, contact the package vendor. To attempt to
load this package again, type 'appenv /resetskippkgs' at the command prompt.

Needless to say appenv doesn't exist, but with a combination of clues and guesswork I figured out that it was referring to the studio executable. Running IronPythonStudio.exe /resetskippkgs restored the designer.

After this I was getting a really weird error:

Could not load file or assembly 'IronPython, Version=1.1.0.0...
or one of its dependencies. The located assembly's manifest
definition does not match the assembly reference.
(Exception from HRESULT: 0x80131040)

Now it was failing to load IronPython at all when I attempted to build! It turns out that it had located the IronPython assemblies installed along with the Visual Studio 2005 SDK - which were an older version. I haven't actually solved this problem (it must be picking up the location from some environment variable or registry entry set by the VS 2005 SDK), but manually copying the right assemblies into the build directory fixes it and I can design and debug to my heart's content. Phew!

[1]Although if you aren't compiling then the code will need a bit of tweaking - for example you will have to add calls to clr.AddReference to manually add references to the assemblies you are using.

Posted by Fuzzyman on 2007-12-20 15:11:03

