Yesterday we went to see The Pearl Fishers by Bizet at ENO. I had seen some pictures of the production and it looked quite interesting, very movie-like, which is not surprising since the director is actually a filmmaker. It did indeed look very beautiful, but that’s pretty much all I liked.
I did not like the main singers at all. The music is so sweet and beautiful, and I found their voices quite strident. The soprano’s voice seemed more suitable for operetta, and the tenor had none of the feeling and sweetness in his voice to match the French opera music. The fact that they were singing in English was also not a plus, but I could have been ok with that given better singers. On top of this, it was quite warm in the hall and there was a guy in front of us smelling of some weird shampoo or aftershave, so at intermission we decided to go out and enjoy the beautiful day on the South Bank. Not sure if I will be back soon for an opera there 🙁

Yesterday I went again to the National Gallery in London and (among others) saw the huge Venice paintings by Canaletto, but this time in a totally different light. I was amazed by how much Venice today looks like Venice in the 18th century! With other paintings, the passing of time is obvious, but Venice still looks the same. Today you just see fewer gondolas and more vaporetti, and people are dressed differently, but that’s all! No wonder I felt like being on a movie set…

This year for our anniversary we went to Venice. Not sure why we ignored Venice for so long: maybe because of the touristy image it has. I was expecting to be annoyed and disappointed and to find it overrated, as I sometimes find Paris. However, I absolutely fell in love with it. It’s not so much about Piazza San Marco, which I found impossible to navigate during the day, but about the totally amazing feeling of going around a place with no cars, narrow little streets and so much water. We were also lucky enough to have only beautiful sunny days, and to have booked an excellent apartment on La Giudecca, right near the water. Staying on La Giudecca allowed us to experience the water at various times of the day, from sunrise till late at night,

Morning view from La Giudecca

plus it gave our holiday a much more local, residential feeling, and allowed us to rest in the evening instead of going out to look for a restaurant. I started loving Venice the moment we took the vaporetto from Palanca to San Marco. The whole combination of beautiful turquoise water and classical buildings just seemed so picture-perfect.

View of Grand Canal from Accademia Bridge

Being in Piazza San Marco for the first time was not as impressive as expected, as it is so overcrowded.

Crowd on the Bridge of Sighs

I found that the Piazza looks so much better in the evening, around 6-7pm, when most tourists are gone and the sunlight is much calmer. So we decided to walk around instead. Somebody told me that what you should do in Venice is get lost in the back streets. I am not sure how it would be possible NOT to get lost. But that’s one of the things I liked about Venice. You feel like you’re in a maze and there is a surprise at every corner: another campo, a small bridge, or just the end of the street! Really wonderful feeling, actually!

However, it makes it very hard to find again that store you liked yesterday, where you could not make up your mind whether to buy something or not…

We did not spend too much time in museums this time, as the weather was so amazing that it just felt wrong. We did go to see the Peggy Guggenheim Collection, the San Marco Basilica, Santa Maria della Salute and San Giorgio Maggiore. We were especially happy that we went to San Giorgio Maggiore, because it is a very beautiful church and the view from its Campanile is amazing!
The Basilica is beyond words. I’ve never seen anything like it, and it’s even hard to react to what is inside. It’s quite stunning, actually, both when you look up at the ceiling and when you look down at the tiled floors.

San Marco Piazza

Detail of San Marco Basilica

In the Peggy Guggenheim museum we saw some very nice pieces and also discovered some quite interesting artists.
We were so fascinated with Venice that we never made it to Verona, as we had planned. We did have a day outside of Venice, though, when we went to the islands of Murano and Burano. As expected, Murano is full of glass stores. It has a much calmer feel than Venice and you can find amazing glass pieces there. We tried not to spend too much or buy any big piece, but it’s definitely worth going there, even just to look.
Burano is a very small island and is extremely cute, as the houses are painted in very bright and happy colours.

They do have a local industry, as they make very nice lace, and there are lots of stores selling lace-based products. Besides the photographic frenzy we got into there, we also discovered some really tasty cookies: Bussola Buranello. They look very much like any other cookies, but they are really, really tasty!
All in all, being in Venice was an amazing experience for us and I felt really sad leaving it! Not sure why, but Venice feels like a living being, not just a city you go and visit. Hmm, should explore this feeling 🙂 Anyway, we definitely want to go back there soon!

In a few days, we’ll be back in Helsinki for a few days. I am quite looking forward, actually, to seeing familiar people and places. Trying to remember places I liked in Helsinki, I realized that in our less than 2 years there we did manage to try out quite a few cafes and restaurants 🙂 I am not sure how many we can revisit during this short stay. However, some favourites are already on the list.

For the past week I have been playing with the Samsung i8910HD, trying to figure out whether I would consider having one as my phone. First impressions: it is quite big, but very well built and really nice looking. The first thing you want to try is video playback, and it delivers beyond any expectations. The image quality and the sound are amazing, and the colours are much better than on my iPod Touch. The sound with headphones is just as good, and I really liked the 5.1 option that you can enable when the headphones are on.
I also liked the fact that you can use an alphanumeric keyboard with T9 prediction, like the one on my N95 8GB phone. I like it much more than the QWERTY one the iPod Touch/iPhone has. Of course, you can also have that option on the Samsung.
Most of the experience on the phone was familiar, as it is Symbian-based like my phone, so I did not have much to get used to in terms of menus and such. However, since this is a PDA-size device and mainly touch-based, I had different expectations of it than of my smaller phone. I looked at it mainly as a replacement for my iPod Touch + phone combination, which it currently is not, mainly because it does not deliver the same experience. Given the processor, everything happens quite fast on it and the scrolling works well in general, but it becomes more sluggish in the web browser. It did seem to get faster when I tried Opera Mini, but that browser itself did not inspire me much. I would even have put up with that, given what an amazing device it is, but the deal-breaker was the application offering. Once you get used to how easy and convenient it is to find and install applications on an iPod Touch, it’s hard to accept the Google-then-install method and paying £5-£10 for an app. As a phone, the application issue would not matter much (I did not buy much for my current or previous phones either), but as a PDA/media device, I do like being able to get all sorts of fun or useful apps easily and cheaply, like Traffic, London Tube, London Bus, etc.
The iPod’s integration with the iTunes store was not really a major point, as lately I have started buying from other online stores (Amazon UK, Tesco) that have better prices and sell MP3s.
So, for now, I decided that my N95 8GB is still the best phone for me, size- and feature-wise, and my iPod Touch still the best overall media device (even though it’s not as advanced technically as the Samsung). Yes, I did think of combining them into an iPhone, but I just cannot stand some of its limitations, like the inferior quality of the camera lens, the (built-in) battery lifetime, the lack of memory card support and the limitations of its Bluetooth connectivity. I am also not so sure that having a media device as a phone is a good idea, as I tend to play a lot with it and that would mean running out of battery on my phone! I prefer to run down the battery on my media device and still have a phone 🙂

We just came back from a different kind of trip: on 16 June my father had heart surgery to replace his aortic valve. The operation was done at Papworth Hospital by Mr. Frank Wells. As a parenthesis, I find it quite weird that in the UK surgeons are called Mr. As explained here, it seems to be a historical thing, but still weird…
The operation usually lasts around 4-5 hours, including all the preparation as well as the final details.
However, the actual open heart surgery takes more like an hour. My father was taken from the ward around 11am and brought to the ICU around 2:30pm. It was quite stressful to sit and wait for news. I had never been in such a situation and had no idea what going through it feels like. I felt somewhat better thinking that Papworth is one of the best heart surgery hospitals in the UK and the world, and Mr. Wells one of its best surgeons. That gave me quite some reassurance and I decided to put all my trust in him. We were also lucky that Papworth has such a nice lake, and we spent the whole time outside, which helped a lot. Here are some photos from the lake.
Most annoying was that we barely had any signal on our mobiles (crappy Vodafone coverage!) and we had to find a bench where we had 1-2 bars. Dirk seemed to be a good antenna, so he had to stay put with the mobile phone on him 🙂
The call came around 2:45pm. The nurse from the ICU said that everything had gone perfectly and that we could see him. We went there first for a few minutes, then, a couple of hours later, we went up to be there when he woke up. It took some hours for that to happen, as the sedation slowly wore off. At around 8:45pm they finally removed the oxygen tube and we could finally go home and rest. The following days were mixed: some days better, some worse. Nothing major happened, but I never knew what to expect. My father was very chatty and positive one day and very gloomy another. Considering the type of operation, I was impressed that he was able to start moving around 2-3 days after it. Though they say that the hospital stay is usually 5-7 days after the operation, we could not go home because some blood values went too high and we had to wait until they came down. Which was not entirely bad, as my father gained more strength and confidence in the meantime. He was finally discharged on 26 June. He is now doing quite well and getting better every day.

A couple of weeks ago I joined the Apple touch “revolution” by getting an iPod Touch. In the meantime it has become my constant companion around the house, as I use it for checking emails, checking news, getting on Skype or Facebook, listening to music, or simply playing some Solitaire. I love the screen, which is bright and sharp. I added lots of applications, which Apple obviously makes very easy, either through iTunes or through the App Store on the device. I now have Skype, Facebook, Yahoo! Messenger, Internet radio, London Tube, traffic updates for the UK and for the London tube, and all sorts of other cool and useful applications.
Overall, I really love the iPod Touch as a surfing device, and it has far more good points than bad. The major negative issues are related to text input and, sometimes, scrolling by touch. Even though I do like to check my emails on it, I hate writing emails on it. I did get better at typing on it, but it is still painful, made even worse by the crappy word completion it has.
I am happy I did not get an iPhone, as I cannot imagine having a phone with such a bad text input experience. I am not sure how people manage to write SMS/emails with it. They must have a lot of patience. Plus, I do like being able to write SMS with just one hand, which is quite difficult with any touchscreen device, especially one using a QWERTY keyboard in portrait mode! The other thing I don’t like is that the screen is sometimes too responsive to touch and it’s easy to click on things by mistake, especially when trying to scroll. I guess I have to live with this one, but I am hopeful that in a future version I could use the keyboard in landscape mode and maybe they will also improve their word completion. As I am happy with it so far, things can only get better 🙂

I’ve just attended a 2-day seminar on Computation of emotions in man and machines organized by Prof. Peter Robinson and Dr. Rana El Kaliouby at The Royal Society in London.
The seminar had an amazing lineup of speakers, important figures from emotion-related research: Paul Ekman (Univ. of California San Francisco, USA), Rosalind Picard (MIT Media Lab, USA), Cynthia Breazeal (MIT Media Lab, USA), Roddy Cowie (Queen’s University, Belfast), Jeffrey Cohn (University of Pittsburgh, USA), Maja Pantic (Imperial College London, Univ of Twente, NL), Klaus Scherer (Univ. of Geneva, Switzerland), Kristina Höök (SICS, Stockholm University/KTH, Sweden), Mel Slater (UCL, UK), Ursula Hess (Univ. of Quebec, Canada), William (Bill) Gaver (Goldsmiths, University of London, UK), Amy Baylor (Florida State University, USA), Simon Baron-Cohen (Cambridge Univ., UK), Beatrice de Gelder (Tilburg University, NL, Harvard Medical School, USA), Catherine Pelachaud (Universite de Paris, France), and Chris Frith FRS (UCL, UK). Here is the agenda and the description of the seminar.
In summary, it was 2 days full of very interesting presentations on sensing, recognizing, modelling and using emotions, with lively discussions about the purpose of and challenges in the area. I will try to give a more detailed description of the talks, though I am not sure I will manage to actually capture the essence and importance of each one.
Paul Ekman, one of the most important figures in the study of emotions, started the first day with a presentation on Darwin’s contribution to the study of emotions and, especially, of the facial expressions of emotions. Darwin’s (other) great work, “The Expression of the Emotions in Man and Animals”, set out to prove that emotions are universal across humans and animals. Ekman highlighted the main contributions of the book, such as the discrete representation of emotions, the focus on facial expressions and the theory that facial expressions of emotions are universal.
Chris Frith focused his talk on neuroimaging of emotions and on how we tend to mimic the emotional expressions we see in others. He also talked about differences in how people react to emotions exhibited by robots, about trustworthiness perceived from facial features like raised eyebrows and eye opening, and about communicating through facial expressions like the eyebrow flash. I have to admit that I did not agree with some of the things he presented, especially regarding how people react to emotions displayed by robots, which seemed quite over-generalized to me, especially when thinking of robots like the ones built by Cynthia Breazeal. Another thing I did not agree with was an example from a study on perceived trustworthiness based on face pictures. I did not fully agree with the categorization of the presented faces into trustworthy and untrustworthy, as I found myself reaching back to past experiences and using similarity to decide which face seemed more trustworthy to me and which not. So his hypotheses about the importance of eye size and eyebrow height did not seem to hold for me.
Klaus Scherer is a major figure in the area of modelling emotions. He presented his Component Process Model, which models emotions as episodes rather than snapshots. His appraisal-based model considers various factors involved in emotion generation as a response to external and internal stimuli.
Kia Höök presented some of her work around the “affective loop” approach to the area, where end users are deeply involved in consuming and managing their own data through creative and personalized means.
Jeffrey Cohn talked about experiments with lifelike avatar faces, showing how people tend to adapt their head movements and facial expressions in response to the other party’s behaviour during social communication.
Ursula Hess’s talk focused on other face characteristics that can affect emotion recognition. Emotion recognition research mainly focuses on certain features associated with certain emotions, but other characteristics of the face itself are not actually considered: man vs. woman, dominant vs. submissive, age, ethnic group, etc. Such characteristics can actually influence both the emotional display and, of course, how others perceive it.
Maja Pantic’s talk focused on machine learning algorithms for facial-based recognition of non-acted emotions. Most face-based emotion recognition research has been done on acted emotions, which are exaggerated and last longer than real-life emotional displays. She presented her research on combining various Action Units for appearance-based automatic emotion detection. She also emphasized context-aware, multi-modal emotion recognition as an emerging and challenging field.
Beatrice de Gelder emphasized the role of bodies in emotion recognition and how they add important information to facial expressions. Her neuroscientific research focuses on fMRI-based detection of brain activation related to emotion recognition, which could tell us more about how people recognize emotions in others, as well as what their emotional response to them is. From this point of view, face-based emotion recognition does not seem to generate much activation compared to recognition based on body language. Body language also plays a much more important role when distance is involved or for people with visual deficits.
The second day started with a talk by Simon Baron-Cohen on efforts to improve emotion recognition in high-functioning autistic kids. He talked about a DVD that was created for and with autistic kids and their families, with the goal of enabling kids to better recognize emotions and situations. The results of the studies performed with the resulting DVD were extremely good, though there were no results yet on the long-term effects of such an experiment or on how well the kids could actually apply the recognition to the real world. It was also not clear whether the kids could actually change their empathic response to the emotions they see, or whether the improvement was purely cognitive.
Rosalind Picard focused her talk on her group’s current work on autism. She showed and demonstrated sensing systems they developed for detecting stress in autistic kids, which make it easier for families and educators to detect when stress starts building up, even before external signs are clearly visible. She presented various examples of real users experimenting with the sensing devices.
Roddy Cowie provided a comprehensive view of emotion research today, as well as its major challenges. He talked about emotional colouring, referring to the various types of emotions we experience during the day. He highlighted some new approaches that include probabilities and context in their models, for a better definition of the emotional state.
Cynthia Breazeal presented her robots Kismet and Leonardo, as well as their internal model, inspired by various domains, both technological and from the understanding of humans. Her videos of Kismet and Leonardo had a very big impact on the audience, and it was clear that humans can have a very strong (positive) emotional response to robots 🙂 She also presented one of the latest improvements to Leonardo, where he is shown having an internal model of the true and false beliefs of two people. I found that video extremely interesting: the scenario is that Leonardo watches (with his eyes) two people, one in black and one in red, hide some treats in two boxes. Then they both go away and only one of them returns, changing the place of one of the treats. At that point, Leonardo “understands” that one person has a true belief about the situation while the other has a false belief, since he was not there when the change was made.
So, when the one with the false belief comes back and pretends to try to open the box where he thought his treat was, Leonardo, by pressing the corresponding button on a remote control, opens the right box, where the treat is actually now located. When the one with the true belief comes back and also pretends he cannot open the box with the treat, Leonardo again opens the right box with the button. I found it fascinating to watch!
Catherine Pelachaud’s talk was on Embodied Conversational Agents; she presented their advances in creating realistic virtual characters that can use multimodal expressions of emotions. To make them look more realistic, their agents incorporate a temporal dynamism that follows the evolution of emotions. She showed how complex emotion expressions are created by combining various facial expressions.
Mel Slater’s talk was on immersive experiences. He presented some experiments they have been doing with the CAVE environment at UCL. I found this work quite interesting, especially for the ethical issues it raised. Slater introduced the two main components of immersive experiences: PI (Place Illusion) and PSI (Plausibility Illusion). The subjects in his experiments wore sensing devices that measured their heart rate, heart rate variability and GSR (galvanic skin response) in order to gauge their emotional response to the simulated situations. The results showed that people responded to simulated situations as if they were real, before their cognitive side eventually overrode the response with the knowledge that the experience is not real. The experiments also showed that full immersion, where people were able to move around in the virtual space, generates very strong emotional responses despite the knowledge that it is all just a simulation.
Amy Baylor’s talk was on the importance of the appearance of virtual interface agents. Her experiments focused on how to better design agents that are supposed to interact with people, so that they are efficient in conveying their message to end users. One of the examples she gave was an experiment where interface agents were supposed to convince teenage girls that a career in engineering was actually “cool”. The experiment used two basic agents (one male and one female) but varied their hair, age, dress style, etc. The girls were then supposed to rate the agents on various parameters, to figure out which aspects of appearance mattered most when the message was about something intellectual, like engineering. Overall, I think it’s quite an interesting use of avatars, though I wonder what impact such a tool would have if used by companies in their hiring process for customer-facing people. Of course, there should already be enough studies on what counts for customer-facing roles in various companies and situations. The advantage of using avatars is that the appearance can be changed very fast, but I can very well imagine bad usages coming out of this, as in “you don’t fit the profile given by the computer”.
The seminar ended with William Gaver’s talk on designing for emotions. He emphasized the importance of going out and testing systems with end users instead of trying to build perfect systems in the lab. Unlike the others, he presented a failed experiment in which they placed about 10 monitoring sensors in a family’s home over a longer time period. The data from the sensors was combined to create a horoscope-type report with statements like “you had too much to do” or “you are too stressed”, and suggestions like “you should take it slower”. The experiment was eventually a failure, as the people in that family did not like being told redundant or inexact things. I have to say I disagree with the conclusion that people do not want to be monitored or told what happened based on data collected like that. At the very least, I found the experiment not relevant enough for that conclusion, as I can see lots of problems with the way it was done. His conclusion that the failure comes from the affective computing approach itself is quite weird, as the type of application and the user interaction should be questioned first. The output was too abstract: it did not give the family any information they did not already know, while hiding exactly what they did not know. I was surprised that they did not even try to find out from the people what they would have liked to see from such (or similar) systems. Anyway, to me it just looked like an example of bad interface and interaction design that had nothing to do with the sensing part.
The seminar’s final panel focused on ethics and privacy for such systems. As usual, very hard to define and discuss. As with anything, what matters is how it is used and for what purpose. The research in certain areas, like autism, proves that such technology is indeed useful. User involvement came up again, and it was recognized that now is the time to look outside the labs and involve people in the design of such systems.

…quite eventfully for us. Yesterday our whole heating system failed (heating and hot water), so we had to make new arrangements for celebrating New Year’s Eve. We bought a new oil heater and heated only the one room where we stayed. Luckily, the food and drink helped us get warmer 🙂
This morning we were lucky: somebody came and managed to fix the problem. So, after all, the new year started quite well 🙂

Happy New Year!