March 27th, 2014 | Tags:

I’m a Java developer by trade and I wanted to try my hand at learning something new, so I picked NodeJS to start with. I have never done much work with JavaScript, so it felt a little awkward at first, especially since NodeJS is not statically typed like Java, but I finally managed to start writing an application with it.

The first problem I ran into was printing the raw body of a POST request, because I needed to understand why my code, which worked on the raw body, wasn’t working. I was using Express. Before someone tells me that I shouldn’t have cared about the raw request body anyway and that I could have just had NodeJS parse my incoming JSON automatically: I wasn’t using JSON, but I should have been.
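
For the record, letting Express parse JSON for you is a one-liner. Here’s a minimal sketch, assuming Express 3 and a client that sends Content-Type: application/json (the /data route is just for illustration):

var express = require('express');
var app = express();

// parse application/json bodies into req.body (express.bodyParser() also works in 3.x)
app.use(express.json());

app.post('/data', function(req, res) {
  // req.body is now a plain JavaScript object
  res.json({ received: req.body });
});

app.listen(3000);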

Oddly, no matter what I tried, the request body was always empty. I saw that others on Stack Overflow had the same problem, and the answer was right there:

On the server side, Express uses connect’s bodyParser which by default only supports the following content types:
application/json
application/x-www-form-urlencoded
multipart/form-data

I was indeed using the Express body parser, but removing it had no effect. Digging further, I realised that I basically needed the plain text body, and this answer on Stack Overflow worked like a charm. The code snippet is:

// Capture the raw body of any text/* request and expose it as req.text
app.use(function(req, res, next){
  if (req.is('text/*')) {
    req.text = '';
    req.setEncoding('utf8');
    req.on('data', function(chunk){ req.text += chunk });
    req.on('end', next);
  } else {
    next();
  }
});

The POST body is saved as req.text on the Express request and can be used directly. Once again, Stack Overflow to the rescue!
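
As a quick illustration, a hypothetical route using it might look like this (assuming the client posts with a text/* content type, so the middleware above kicks in):

app.post('/raw', function(req, res) {
  // req.text was populated by the middleware above
  console.log('raw body:', req.text);
  res.send('received ' + req.text.length + ' characters');
});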

Comments Off (1,096 views)
March 27th, 2014 | Tags:

Heroku is a great platform for running your applications, but it has a small drawback: you can only browse the last 1,500 lines of your logs. This is fine as long as you’re in development, but once you have alpha users on your application you can’t really afford not to have logs. Heroku has obviously thought about this and offers the concept of drains, which lets you easily ship your logs elsewhere for later use. However, I decided to do this differently.

Enter logstash. I have been wanting to play with logstash for a while and this looked like the perfect opportunity. Simply put, logstash manages your logs for you, including storage and preprocessing, which is absolutely great.

Before you start, you will need a server of your own; a small VPS will do. I use one from DigitalOcean (referral code warning! :)) because I only need it for basic usage, nothing fancy, and $5 a month is a great price for a VPS.

You will then need to install logstash. There used to be a flatjar (or monolithic jar, as it was earlier called) to run logstash, but the developers now package scripts to run it instead. To get and install logstash, just follow the Getting started with logstash tutorial.

I will assume you have read through the entire tutorial above and know what inputs/filters/outputs are.

Next you need to install the Heroku toolbelt on your server. This is also a fairly straightforward process. (Don’t worry about using an Ubuntu script on a Debian installation, it works fine.) Verify that your Heroku toolbelt is working correctly by logging in and listing your apps. Make sure you install the toolbelt under the same user that you plan to run logstash under.
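
Something like the following should do it (heroku login will prompt for your Heroku credentials, and heroku apps lists the applications on your account):

heroku login
heroku apps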

The next step is to get the logs from Heroku to logstash. You could do this by using a syslog drain from Heroku to your logstash instance, but I found that to be harder to set up because I am not a sysadmin, only a basic Linux user. Instead I decided to use the very useful heroku input plugin that comes with the contrib package of logstash.

The logstash config file below has a few limitations:

  1. it reads logs from only one application
  2. it only saves router logs
  3. it saves logs to an embedded instance of elasticsearch

Save the following in a logstash-heroku.conf file on your server:

input {
        heroku {
                app => "project-light-web"
        }
}
filter {
        grok {
                pattern => "^%{TIMESTAMP_ISO8601:timestamp} %{WORD:component}\[router]:.*method=%{WORD:method} path=%{URIPATHPARAM:path} .*fwd=\"%{IP:userip}\" .* connect=%{NUMBER:connect}ms service=%{NUMBER:service}ms status=%{NUMBER:status} bytes=%{NUMBER:bytes}$"
        }
        # use the ISO8601 timestamp captured by the grok pattern above as the event time
        date { match => [ "timestamp", "ISO8601" ] }
}
output {
  stdout { }
  elasticsearch { embedded => true }
}

Now start (or restart) logstash, passing your config file so that it is picked up:

./logstash agent -f logstash-heroku.conf -l logstash.log

The -l parameter writes logstash’s internal logs to a file, which is useful for debugging. Now, whenever your application receives an HTTP request, logstash will receive the corresponding router log line too and it will be printed on standard output.

Remember, these logs are only persisted in the embedded elasticsearch instance packaged with logstash. If you want them in a standalone elasticsearch instance instead, you need that instance running on the same box so that logstash can persist the logs there.

Installing elasticsearch is fairly straightforward. You will need to tell logstash that you want to use an elasticsearch instance that is running on localhost. To do this, change the output section in your logstash-heroku.conf file to:

output {
  elasticsearch { host => "localhost" }
}

That’s it, you’re done. You can now check if the logs are being saved in your elasticsearch instance by going to http://<your-server>:9200/_search?pretty.

Comments Off (555 views)
October 14th, 2013 | Tags:

I was trying to install the RobotFramework library robotframework-httplibrary today and pip suddenly decided not to work. I kept getting the same error again and again:


Traceback (most recent call last):
File "/usr/local/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2603, in
working_set.require(__requires__)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 666, in require
needed = self.resolve(parse_requirements(requirements))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 565, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: pip==1.1

I googled and found a Stack Overflow answer that mentioned that the OS X 10.8 upgrade could be the issue, and I remembered that I had upgraded about a month back. The fix was fairly simple:


sudo easy_install -U pip

And then

sudo pip install --upgrade robotframework-httplibrary

Remember, make sure you have the latest Xcode and its command line tools installed, otherwise pip may continue to mess up.
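
If the command line tools are missing, on newer OS X releases (Mavericks onwards, if I remember correctly) they can be installed straight from the terminal:

xcode-select --install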

Comments Off (392 views)
October 12th, 2013 | Tags:

I recently started my new job and got a shiny new MacBook Pro to work on. Needless to say, I love everything about this machine, especially the SSD, which makes everything blazingly fast. I hadn’t used a Mac before this, so I had a bit of trouble getting used to the controls, but I became comfortable with them in a couple of weeks.

To cut a long story short, at my new workplace we use svn for source code management. I needed to install svn and the easiest way I could find was to use MacPorts to do it. But there was a minor (not) problem: MacPorts has moved on to svn 1.8 and our repositories are still on 1.7, which is not that bad (better than being on 1.6 at least). However, downgrading svn from 1.8 to 1.7 on MacPorts was much harder than I thought it would be.

Out of the box, MacPorts is fantastic: it installs everything with ease and requires very little user intervention, which is great for a Mac novice like me. But downgrading anything is a bit of a challenge. I eventually found a blog post which laid out this process quite simply.

Basically, you need to check out the MacPorts port definition for subversion 1.7:

svn co http://svn.macports.org/repository/macports/trunk/dports/devel/subversion --revision 108493

Go into your checkout and run:

sudo port install

This will install subversion 1.7, but there is a good chance that MacPorts still has 1.8 active instead of 1.7. To confirm:


bash-3.2$ sudo port installed subversion
The following ports are currently installed:
subversion @1.7.10_1 (active)
subversion @1.8.3_2
subversion @1.8.3_2+universal
subversion @1.8.3_3
subversion @1.8.3_3+no_bdb+universal

If you see that 1.7 is active then you can skip the activation step. If it is not active then you need to activate it explicitly.

sudo port activate subversion @1.7.10_1

After this your svn should work, but if it doesn’t and you get an error such as

Library not loaded: /opt/local/lib/libserf-1.0.dylib

then you need to downgrade your serf1 library as well. The process is the same as for subversion:

svn co http://svn.macports.org/repository/macports/trunk/dports/www/serf1 --revision 108607

Go into your checkout and run:

sudo port install

And make sure serf1 version 1.2.* is activated by executing

sudo port activate serf1 @1.2.1_1

Your svn is now downgraded to 1.7.

Bonus: if you see javahl errors in Eclipse then you must downgrade your javahl bindings as well.

svn co http://svn.macports.org/repository/macports/trunk/dports/devel/subversion-javahlbindings --revision 106629

and then activate it

sudo port activate subversion-javahlbindings @1.7.10_0

Comments Off (662 views)
October 12th, 2013 | Tags:

After struggling with the Eclipse STS plugin for Kepler I finally decided to give in and install the Spring Tool Suite. I don’t really understand what went wrong with the installation, but Maven went crazy after the install and kept giving me "Updating Maven Project". Unsupported IClasspathEntry kind=4 over and over again, which got annoying really quickly. Despite many attempts at re-enabling the Maven nature on the project, the error wouldn’t go away. I eventually ditched Eclipse completely because I really need Spring support, as most of my work is based around Spring.

The STS install was a breeze and the green colors illuminated my Mac’s screen. I was pleasantly surprised. I immediately installed Subclipse, my favorite svn provider, and went about adding my repositories to STS. I right-clicked on the trunk and there was no option to check out the project as a Maven project. Hmm, that’s odd. I googled around a bit and realized I was missing the Maven integration for Subclipse. I added its update site to my STS install and voilà, everything works.

Quite pleased with STS so far.

Comments Off (1,879 views)
October 8th, 2013 | Tags:

I just bought a new Binatone 4210 from Flipkart and, needless to say, I was both disappointed and surprised. The unit itself is flimsy and the rechargeable batteries that came with it barely work.

But the biggest problem was that it came with no service manual, so I had to spend some time figuring out how to pair the handset with the base station. After googling for a while I couldn’t find anything specific to the 4210, but then I had a hunch.

So, to pair the handset with the base station, make sure the handset says “Searching”, then from the base station options go into “Registration”.

That’s it, you’re done.

Comments Off (655 views)
April 23rd, 2013 | Tags:

A while back I came across a very useful service that I thought could be used for a lot of things. The service I am talking about is PushBullet, developed by redditor /u/guzba. I decided to try and write a small Android app that would allow me to share content across devices using this service.

PushBullet has a recently developed API, and it seemed to work well with a simple curl call:

curl https://www.pushbullet.com/api/devices -u API_KEY:

I basically wanted to execute the same thing on my Android device. As any Android developer will tell you, making HTTP calls from Android isn’t exactly the easiest thing in the world: you have to wrap your call in an AsyncTask and then use the Handler mechanism to update the UI when the call is complete. Luckily, James Smith has wrapped all this up, very neatly, in the android-async-http library, which is a breeze to integrate into your project.

The PushBullet API uses Basic HTTP authentication as its security mechanism. This means we must pass our API key as the username when we request devices from it. This all sounds fairly straightforward, and the code to call the devices endpoint using the android-async-http library looks like this:

AsyncHttpClient client = new AsyncHttpClient();
// API_KEY is the PushBullet API key, used as the username with an empty password
client.setBasicAuth("API_KEY", "", new AuthScope("pushbullet.com", 443, null));
client.get("https://www.pushbullet.com/api/devices", new AsyncHttpResponseHandler());

This should all have worked, but it doesn’t. Looking at the logs, I always saw the same error:

04-23 08:52:12.845: W/DefaultRequestDirector(21976): Authentication error: Unable to respond to any of these challenges: {}

Googling around, it seemed that the problem was that the server did not tell the client that it used Basic HTTP auth; it simply said we were “Unauthorized” to view the page. With a properly configured server, our client would have received a list of challenges to reply to and would then have sent the Basic auth credentials. The HTTP client library does not send the Basic auth credentials with the first request to the server, only once the server says that it needs them. The alternative is to send the Basic auth credentials to the server preemptively, with our initial request.

To do that, however, we must first Base64-encode our username and password.

String encoded = new String(Base64.encode("API_KEY:".getBytes(), Base64.NO_WRAP));

And then pass this encoded string in the “Authorization” header of our request.

client.addHeader("Authorization", "Basic " + encoded);

That’s it. Because the Basic auth credentials are now sent to the server with the initial request, the server replies with proper JSON data.
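
Putting the pieces together, the device-listing call ends up looking roughly like this (a sketch, assuming the onSuccess/onFailure callback signatures from the android-async-http 1.4.x releases current at the time):

// API_KEY is a placeholder for your PushBullet API key
AsyncHttpClient client = new AsyncHttpClient();

// Encode "API_KEY:" (empty password) and attach it preemptively,
// since the server never sends a Basic auth challenge
String encoded = new String(Base64.encode("API_KEY:".getBytes(), Base64.NO_WRAP));
client.addHeader("Authorization", "Basic " + encoded);

client.get("https://www.pushbullet.com/api/devices", new AsyncHttpResponseHandler() {
    @Override
    public void onSuccess(String response) {
        Log.d("PushBullet", "Devices: " + response);
    }

    @Override
    public void onFailure(Throwable error, String content) {
        Log.e("PushBullet", "Request failed", error);
    }
});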

I spent a good 6 hours trying to figure this out; I hope others don’t have to.

Comments Off (2,838 views)
February 27th, 2013 | Tags:

At work I have been using robotframework to write acceptance tests. And while I cannot say that I absolutely love it, I can see how the framework is useful despite its somewhat minor shortcomings. I have used FitNesse in the past and it’s a little difficult to pick a favourite at this point; they both seem to have their pros and cons, although I must say that FitNesse, with its built-in HTML editor, definitely has an edge on RF.

Anyway, I was writing test cases and I came to the point where I needed an if statement that AND’d a few variables together and then executed a keyword. The statement itself was very simple:

Keyword           Argument 1                                             Argument 2
Run Keyword If    '${field1}' == 'FIELD1' && '${field2}' == 'FIELD2'     My Keyword

I couldn’t find any way to do this in Robot Framework and the documentation didn’t help, so I went hunting. An informative post on the RF mailing list told me that RF supports all the logical operators that are in the Python language. I don’t know Python, so I had to go looking.

Python supports logical operators just like any other language, except that they are spelled “and” and “or” rather than && and ||, and they can be used directly in RF. So my AND/OR conditions in RF are written as:

Keyword           Argument 1                                              Argument 2
Run Keyword If    '${field1}' == 'FIELD1' and '${field2}' == 'FIELD2'     My Keyword
Run Keyword If    '${field1}' == 'FIELD1' or '${field2}' == 'FIELD2'      My Keyword

As simple as that.

Comments Off (2,375 views)
February 27th, 2013 | Tags:

I can no longer deal with the amount of spam posted in comments. I have, therefore, disabled all comment posting on my blog.

Spammers, you win.

Comments Off (1,175 views)
February 28th, 2012 | Tags:

Ahh, the joys of JIRA. It’s hard to argue that JIRA is not the best bug management system out there when you see that it supports a host of external access protocols. SOAP is (understandably) the preferred method of access and there is a nice example of its use on the Atlassian wiki.

I wanted to see if I could access JIRA with Groovy as well, and perhaps present the data in a pretty-looking desktop app built using Griffon. I couldn’t find any obvious examples, but then I stumbled across the JIRA SOAP lib.

The JIRA access itself is pretty straightforward: you just define a URL to your JIRA instance and the JiraSoapServiceServiceLocator from the SOAP lib does the rest. Pretty nifty.

def server = 'http://jira.iontrading.com'
def url = "${server}/rpc/soap/jirasoapservice-v2"

def serviceLocator = new JiraSoapServiceServiceLocator()
serviceLocator.setJirasoapserviceV2EndpointAddress(url)
serviceLocator.setMaintainSession(true)

We must first generate a login token which is sent with every SOAP request to the server. This is your ‘cookie’.

def service = serviceLocator.getJirasoapserviceV2()
model.loginToken = service.login(model.username, model.password)
println "Login token - " + model.loginToken

Ignore the model prefix; I save the username and password in the Griffon model.

Once you’re logged in you can print server info.

def info = service.getServerInfo(model.loginToken)
println "JIRA version: ${info.getVersion()}"

And also execute JQL queries.

def issues = service.getIssuesFromJqlSearch(model.loginToken, model.query, 20)

“issues” above is the returned RemoteIssue array. You can access all attributes of the issues from this query.
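
For example, to print the key and summary of each returned issue (a small sketch; key and summary are standard RemoteIssue attributes):

issues.each { issue ->
    println "${issue.key} - ${issue.summary}"
}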

In the sample application here I group the issues by a custom field and display them in a JXTable. This custom field is our Greenhopper story point field.

The fun part of writing all this code is that at the end you know what you’re going to get is not some half-baked solution but a fully functional GUI.

The useful part of writing all this is that the application is easily extensible: you just need to add a new tab and you can group your issues according to any criteria you want. Oh, how I enjoy Griffon.

Comments Off (2,507 views)
February 16th, 2012 | Tags:

Ever since I started using GitHub for hosting I have started to dislike SVN. Don’t get me wrong, it’s still very useful and easy to understand, but as a developer I don’t just want something that is easy to understand (in fact, I wouldn’t be able to use such a tool); I want something that works well. Today I found a little gem of a tip related to SVN hidden on SO.

One of our new builds was set up yesterday and it just refused to run, giving an error saying that the shell script it was trying to execute did not have the correct permissions. Hmm, odd, this works on Windows. (Of course it does, you don’t execute .sh scripts on Windows! Duh!) It was obvious that the script’s permissions needed to be changed with `chmod +x script.sh`.

I did that and ran an `svn diff`. No changes. Hmm, that’s odd, I just changed a file, shouldn’t it see this as a change and show me a diff… Apparently not. Subversion does not store file permissions, so how can I make it check out the correct permissions for scripts that need them?

I thought about this for a few minutes and was about to give up and just run chmod via an Ant builder in a Groovy script when I thought to look this up on SO. The same question has been asked before, and it has just the answer I was looking for, though buried under a few other not-so-useful answers.

For the lazy ones: you need to set the “svn:executable” property on each file that you want to be executable. Subversion then maintains this bit and passes it on to the operating system. Not as flexible as just maintaining the original permissions, but it works. Here is the explanation of the property from the svnbook:

svn:executable

The svn:executable property is used to control a versioned file’s filesystem-level execute permission bit in a semi-automated way. This property has no defined values—its mere presence indicates a desire that the execute permission bit be kept enabled by Subversion. Removing this property will restore full control of the execute bit back to the operating system.

On many operating systems, the ability to execute a file as a command is governed by the presence of an execute permission bit. This bit usually defaults to being disabled, and must be explicitly enabled by the user for each file that needs it. In a working copy, new files are being created all the time as new versions of existing files are received during an update. This means that you might enable the execute bit on a file, then update your working copy, and if that file was changed as part of the update, its execute bit might get disabled. So, Subversion provides the svn:executable property as a way to keep the execute bit enabled.

This property has no effect on filesystems that have no concept of an executable permission bit, such as FAT32 and NTFS. [28] Also, although it has no defined values, Subversion will force its value to * when setting this property. Finally, this property is valid only on files, not on directories.
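
In practice, setting the property on the script from earlier is just a couple of commands (Subversion will normalize whatever value you give it to *):

svn propset svn:executable ON script.sh
svn commit -m "Mark script.sh as executable" script.sh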

Comments Off (2,868 views)
February 14th, 2012 | Tags:

Parsing HTML documents is never easy. Some languages have better support for such tasks than others. I thought Groovy wasn’t one of them, but I was wrong. I had to parse an HTML document that wasn’t always well-formed, and that made the task harder. Dennis’ post was very useful when I was getting started. The post introduced me to the TagSoup parser, which is a very useful Java library for parsing HTML.

I eventually did not use TagSoup but instead ended up using NekoHTML. I had to parse HTML that wasn’t always formed the same way. For example:

        <head>
            <title>Hiya!</title>
        </head>
        <body>
            <table>
                <tr>
                    <th colspan='3'>Settings</th>
                    <td>First cell r1</td>
                    <td>Second cell r1</td>
                </tr>
            </table>
            <table>
                <tr>
                    <th colspan='3'>Other Settings</th>
                    <td>First cell r2</td>
                    <td>Second cell r2</td>
                </tr>
            </table>
        </body>

This HTML isn’t well-formed, and if I had to parse it as-is I would be lost the moment someone decided to insert a TBODY tag into a table. This is where NekoHTML comes in: it converts this HTML into well-formed XML that can then be read by XmlSlurper.

First we define the parser.

def parser = new org.cyberneko.html.parsers.SAXParser()
parser.setFeature('http://xml.org/sax/features/namespaces', false)

We must set the parser to ignore namespaces because we don’t really care about them. The parser has a host of other options that can be set, including the ability to remove certain elements. I haven’t tested this (because I couldn’t get it to work with Groovy) but I can imagine that it can be very useful when you want to get rid of text formatting in HTML.

Next, we define our slurper, giving it our newly created parser. Ask the slurper to parse the text and we get a page back.

def slurper = new XmlSlurper(parser)
def page = slurper.parseText(html)

Because our slurper is “groovy”, we can now access the body of the HTML document directly, without needing to write GPath expressions, although that is still possible to do.

As an example, I wanted to find the first table that had a particular heading. In the HTML above this is the table with the heading “Settings”. To do this you just write the following. (Disclaimer: I had help from SO, where I asked how to do this.)

def settingsTableNode = page.BODY.TABLE.find { table ->
  table.TBODY.TR.TH.text() == 'Settings'
}

I can now access all the other rows of the table because I have the table node with me. This makes scraping extremely easy: I can read the parts of the table that I want, or perform further “find” calls on the table node to get other sub-entities.
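
For instance, printing the text of every cell in the “Settings” table is just a matter of walking the node (a small sketch based on the markup above):

// prints "First cell r1" and "Second cell r1"
settingsTableNode.TBODY.TR.TD.each { cell ->
    println cell.text()
}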

Groovy doesn’t only make XML parsing easy but also HTML parsing. :)

Comments Off (5,659 views)
February 12th, 2012 | Tags:

I’ve started experimenting with Griffon again and I learnt a cool new trick today. Pre-0.9.2, all controller actions were called on the UI thread. This meant that if you needed to do anything long-running you’d need to move it to a separate thread, or your application would become unresponsive (at least in the eyes of the user).

Post-0.9.2 this is no longer necessary, because all actions in a controller are called outside the UI (or EDT) thread. This is not a Griffon-specific feature but is present in Groovy’s SwingBuilder. It’s now trivial to execute code within the EDT. As an example, I want to build a file chooser within the controller when an ‘Open File’ action is invoked.

First, define a file chooser in your controller and then initialize it within the EDT:

def fileChooserWindow;
edt {
	fileChooserWindow = builder.fileChooser(currentDirectory: baseDir, dialogTitle: 'Choose an Html file',
					fileSelectionMode: JFileChooser.FILES_ONLY)
	openResult = fileChooserWindow.showOpenDialog(view.mainFrame)
}

The SwingBuilder then ensures that your code executes within the EDT, and because the last call blocks until the user responds, the rest of your controller action does not execute until then. This is extremely convenient because you can now drive UI actions from the controller, dynamically creating UI components as and when you need them and even disposing of unused ones.

Note: Andres Almiray suggested that I prefix the fileChooser initialization with builder because “… nodes are not added to controllers. You must define a builder property on the controller”. I don’t quite know what that means, but because it’s Andres, I went ahead and did it.

In case you’re wondering what baseDir is, it’s initialized like this in my controller:

def baseDir = Metadata.getCurrent().getGriffonStartDirSafe() as File

The Metadata class is a very useful class that you can use to access information about your application.

1 comment (17,922 views)
October 20th, 2011 | Tags:

It’s frustrating to find all your settings gone when you upgrade Eclipse versions, and if you’re pedantic about certain settings (like I am) it can take you almost an entire day to upgrade versions (with everything working just the way you like it). With Indigo, fortunately, none of the manual settings were necessary. There was a very nice import wizard that allowed me to import my old Helios installation. The wizard is in File->Import->Install->From existing installation.

I was really hoping everything would go as planned, but it didn’t. (It’s okay, I had very little expectation that it would.) Maven refused to build any projects. I checked the Maven version and it was the embedded one. I figured that since this was a little older than the latest Maven 3 release it could be causing problems, so I changed the installation to use a manually installed version of Maven.

No dice. After a few minutes of searching, I found the problem (or so I thought). The Maven builder that Eclipse was using was missing.

That’s odd. Maven integration is built into Indigo now, so there is no reason why it shouldn’t find the builder. Not good. I tried a few things but then I found this very useful thread on Stack Overflow. It turns out that the .project Eclipse files weren’t updated correctly. I right-clicked on the broken projects and saw an option to “Convert into maven project” in the Configure menu.

On clicking this option, Eclipse “transformed” my project into a Maven project. I checked the .project file and noticed that it had added another builder in there. All good, but the project still doesn’t build. Grrrr… I check the builders again, and the old, missing builder is still listed.

Why didn’t it remove the useless builder? If I uncheck the missing builder my build still doesn’t run! Argh.

I also see a dependency which doesn’t look like it should be in my build path; it was added because of the old builder (which I hate by now).

Once I remove this dependency from the build path, it works! I’d suggest that you not upgrade Eclipse on a weekday like I did (on a Wednesday too!) because this is one of the more frustrating downsides of Eclipse. Hopefully, the next upgrade will be easy (hah!).

Comments Off (3,725 views)
July 20th, 2011 | Tags:

I am unwell and stuck at home, so I decided to dive into my Android app (a new one this time!). I wanted to perform some tasks that could be long-running (doubtful, but why ever freeze the UI thread). I wanted to show a ProgressDialog here, but it doesn’t work with the static factory method, ProgressDialog.show(…). I kept getting the same exception again and again:

07-20 22:57:16.445: ERROR/AndroidRuntime(25843): FATAL EXCEPTION: main
07-20 22:57:16.445: ERROR/AndroidRuntime(25843): java.lang.NullPointerException
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.app.ProgressDialog.onProgressChanged(ProgressDialog.java:318)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.app.ProgressDialog.setMax(ProgressDialog.java:233)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at com.codercorp.android.playlist.ExporterTask.onPreExecute(ExporterTask.java:68)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.os.AsyncTask.execute(AsyncTask.java:391)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at com.codercorp.android.playlist.PlaylistExporterMain.onClick(PlaylistExporterMain.java:81)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.view.View.performClick(View.java:2485)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.view.View$PerformClick.run(View.java:9080)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.os.Handler.handleCallback(Handler.java:587)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.os.Handler.dispatchMessage(Handler.java:92)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.os.Looper.loop(Looper.java:130)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at android.app.ActivityThread.main(ActivityThread.java:3683)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at java.lang.reflect.Method.invokeNative(Native Method)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at java.lang.reflect.Method.invoke(Method.java:507)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
07-20 22:57:16.445: ERROR/AndroidRuntime(25843):     at dalvik.system.NativeStart.main(Native Method)
07-20 23:02:38.425: ERROR/jdwp(25924): Failed sending reply to debugger: Broken pipe

After googling for a good amount of time I came across this bug report in the Android project: http://code.google.com/p/android/issues/detail?id=3114. The report’s been open for over two years now, but Google hasn’t bothered to fix it (shame!). As is mentioned in the report, the problem is that the update handler variable is not set before the dialog box is shown. The solution is simple: create your own ProgressDialog instead of using the static factory method.

ProgressDialog progressDialog = new ProgressDialog(PlaylistExporterMain.this);
progressDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
progressDialog.setCancelable(false);
progressDialog.setIndeterminate(false);
progressDialog.setMax(getListAdapter().getCount());
progressDialog.show();

If you have run across this bug, please go to the bug report and leave a comment; maybe it’ll force Google’s hand into fixing it (which requires an addition to the ProgressDialog static factory methods).

3 comments (7,006 views)