Blog

PWAs are moving fast. A 2019 update.

Erik

Progressive Web Apps have been around for a while and we - being the eager, happy web devs that we are - have already incorporated them into our standard practices. Faster loading times, offline availability, full-screen, notifications. Great.

Still, there was something we hadn't noticed. During the Performance.now() conference we visited last November, we became increasingly aware that PWAs are not just a great thing for the web ecosystem from a technical or performance perspective. Behind our backs, they have started to make a serious impact on the world as a real alternative to native (store-installed) apps.

That breakthrough is not only a result of PWA features. The world has changed. In 2010 an app was gold. Nowadays, we talk about 'app fatigue' and 'broken app discovery'. For web guys like us, this is of course fabulous news. We never really liked those apps. Time for us to get up to speed with the world and the shift that is taking place.

We thought it might be a good idea to put together a nice reading list for the Christmas holiday... in case you get bored. We proudly present to you: a summary of the best resources we found during our renewed exploration of the state of PWAs.

1) The basics
Freshen up your PWA facts and rethink the pros and cons. Among many good resources, this is a nice starter: All you need to know about PWAs. Alex Russell's quote might be a good one to remember: "PWAs are just websites that took all the right vitamins."

2) App fatigue. How bad is it?
"The average number of apps a user installs per month? Zero." Great headline, but it overstates things a little. It's true for half of all users; the other half installs 2 or 3 apps each month. That still isn't very much. We stick to what we know. To help you form an opinion on the question of app fatigue, the 2017 U.S. Mobile App Report by Comscore is a good resource.

3) Hands-on experience
Walk the walk. Read a great article about PWAs by Dutch web agency Voorhoede, "Every project a Progressive Web App". Or, for a more balanced view, read how Picnic weighed native, PWA and hybrid, written by Lars Lockefeer, once a Tweede golf crusader of the very first hour, now Tech Lead at Picnic.

4) It's easy
When a new technique is easy for developers to use, they will adopt it quickly. Adding PWA support is not difficult: take Webpack, add two plugins and you're good to go (see the sketch at the end of this post).

5) Documented impact
PWA success stories from Uber, Pinterest and Trivago on pwastats.com. Some of these numbers are big.

6) Google is doing the heavy lifting
Google is continuing its push in 2019. See this roadmap for 2019 from the Chrome developer summit.

7) What about Microsoft and Apple?
Microsoft is all-in: PWAs are installable on Windows and present in the store. Apple is getting there as of iOS 11.3.

8) Getting your hands dirty
This one is not a surprise, but leaving it out of our list would be a no-go. The best starting point is developers.google.com on PWA.

9) What more can a PWA do?
A PWA can do what the web can do. That might be a lot more than you think: whatwebcando.today. The Payment Request API is an interesting example (a minimal sketch also follows at the end of this post).

10) The future: PWAs or apps?
Really, this is just opinions and fortune telling. But for what it's worth: a random PWA vs. App poll on jaxenter.com.

And that's it... Missing a masterpiece in our list? Let us know! Special thanks to Jason Grigsby for our renewed inspiration. Check his great talk from Performance.now() here. We aim to publish a more opinionated article on PWAs in 2019. Subscribe to get an update.
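To make item 4 concrete, here is a minimal sketch of what "Webpack plus two plugins" can look like. Which two plugins you pick is up to you; the config below assumes the community plugins webpack-pwa-manifest (generates the web app manifest) and workbox-webpack-plugin (generates a service worker), and the names and options are illustrative rather than a complete production setup.

```js
// webpack.config.js - minimal PWA sketch (illustrative, assumes the two plugins named above)
const WebpackPwaManifest = require('webpack-pwa-manifest');
const WorkboxPlugin = require('workbox-webpack-plugin');

module.exports = {
  entry: './src/index.js',
  plugins: [
    // 1) The web app manifest makes the site installable
    new WebpackPwaManifest({
      name: 'My PWA',
      short_name: 'PWA',
      background_color: '#ffffff',
    }),
    // 2) The generated service worker precaches the build output for offline use
    new WorkboxPlugin.GenerateSW({ clientsClaim: true, skipWaiting: true }),
  ],
};
```

And for item 9, a rough sketch of the Payment Request API. The payment method and amounts are placeholders; the point is that the browser, not your code, provides the payment UI.

```js
// Let the browser show its native payment sheet (feature-detect first)
if (window.PaymentRequest) {
  const request = new PaymentRequest(
    [{ supportedMethods: 'basic-card' }], // placeholder payment method
    { total: { label: 'Total', amount: { currency: 'EUR', value: '9.99' } } }
  );
  request.show()
    .then((response) => response.complete('success')) // a real shop would verify payment first
    .catch((err) => console.error(err)); // user cancelled or payment failed
}
```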

Prototyping: are you doing it often enough?

Erik

You are the technical boss, CTO or tech lead in some other form, and you have just come out of a sizeable software project. You set everything up properly at the start and prepared your team well. And yet, along the way not all technical choices turned out equally well and extra effort was needed time and again. Stakeholders grew impatient and showed less and less understanding. Sound familiar?

"That's just how software development goes." Right? Why sit and brood afterwards over the road not taken? Surely you can spend your time better. No, you don't get away with it that easily ;) You are super agile and a retro monster. Of course you learn from it and do better next time. So take a moment for reflection: what if you had explored the important technical choices more often by means of prototypes?

What's there to lose?
Why wouldn't you do it? It costs you almost nothing. Prototypes are built quickly: few hours and a short lead time. Say you prototype 4 x 60 hours in a 3000-hour project. That is 8% of your capacity. You easily lose that amount of time on a single medium-sized technical problem, and you may well run into more problems of this kind. With a serious miscalculation it goes much faster still, on top of which come the consequences for your organisation as a whole and the extra lead time you need. Besides: prototype phases are something you plan. Setbacks are something that happen to you.

2nd time is a charm
Put differently: prototyping gives you a second chance you otherwise wouldn't get. Or, in terms of the road not taken: prototypes let you follow several roads (for a stretch) without the otherwise heavy consequences of taking a wrong turn. Not only at the higher level of concept, architecture and technology choices, but also in the technical execution you get a second chance this way. A developer who has prototyped a problem will reach a better solution faster during the real implementation. He has already thought the problem through and understands it. He has a picture of the software that has to solve it, and ideas about how to do it better than the first time.

Conclusion?
Long story short: we are convinced that regular prototyping gets you better software. At no extra cost, and with a more pleasant project as a bonus. We ourselves love to prototype, not only because it is a fun and free-form way of building software, but above all because together you learn a great deal very quickly. In our experience we never say afterwards - and neither does the client - that we should have skipped the prototype and started building right away so we would have finished sooner. The feeling is rather: thank God we did this, because otherwise we would have run aground badly.

Finally: because of that 'learning together', we see a prototyping project as an ideal way to get acquainted with new clients before entering into a collaboration. Afterwards, you know what you have in each other and whether you can really create something together. Want to prototype with us? Contact Hugo or Erik.

Meet Marlon: CTO of Tweede golf and home-automation nerd

Marlon

If, like me, you really (read: almost compulsively) want to make, build and code, that obviously doesn't stop when you pull the office door shut behind you. Even when I'm not at Tweede golf, there is always something I'm working on. My most out-of-hand pet project has to be my home-automation system. My apartment has become pretty smart by now. Microservices and all. My system is called "HANNAS", and I have been working on her for about two years now.

HANNAS knows and measures everything about us
HANNAS' brain is a Raspberry Pi, connected to all kinds of sensors and actuators (lighting, ventilation system, etc.) in our apartment. And she is in contact with my smartphone and with my girlfriend's. I have installed sensors on the doors, so HANNAS knows whether someone is home, and we continuously measure the temperature and humidity in the different rooms. HANNAS stores this data and triggers actions at appropriate moments. For example, she turns on the heating and the light in the hallway in advance - if needed - when one of us comes near our apartment with his smartphone.

Ever smarter
I recently added that when we start cooking or showering, HANNAS automatically switches the ventilation system to a higher setting. And that when the bedroom door is closed at night, the light dims very gradually; HANNAS has already turned off the lights in the other rooms by then. Very pleasant. Next on the list: making HANNAS give the plants extra water at higher temperatures and close the curtains automatically when it gets dark. After that? A large part of those annoying repetitive chores around the house can be automated. That means an interesting search for creative solutions on the one hand and, once it works, more convenience on the other.

Why it keeps fascinating me
That is mainly the search for elegant solutions. Besides simply being fun, it is above all tremendously educational! One of the most important things HANNAS has really made me realise is that the reliability of a system is ultimately the deciding factor in whether you actually experience convenience. An example: in earlier versions I would sometimes wake up with a start because HANNAS suddenly switched the bedroom light to full brightness in the middle of the night. I also once cycled home to check whether we were being burgled, because HANNAS sent me a message that the front door was being opened while I knew for certain that my girlfriend was at work. Luckily it turned out to be a bug. Which I of course fixed immediately.

Architecture
Finally, something about how HANNAS is built. I set HANNAS up with a microservices architecture and mainly use the same modern web techniques and best practices that we also apply at Tweede golf. As a result, the system has become extremely reliable. By splitting the different tasks into microservices, the system is much more measurable, testable and flexible than if it were 'one big application'. NodeJS and websockets are used to make everything run super fast: it takes only a few milliseconds for the lights to come on after a movement is detected. Automated tests make sure a small change doesn't introduce new bugs - and that I don't have to go home again because I think we're being burgled. HANNAS is also - naturally - well shielded from the public internet. One of the few things she is explicitly allowed to fetch from outside is the weather report. The data she collects is stored locally; beyond that she only triggers actions in the house and sends notifications.
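To give an idea of what one such microservice could look like, here is a minimal NodeJS sketch of a hypothetical light service that reacts to motion events over a websocket. The 'ws' library is real, but the broker URL, topics and message format are invented for this example; this is not HANNAS' actual code.

```js
// light-service.js - a minimal sketch of one microservice (hypothetical, not HANNAS' actual code)
const WebSocket = require('ws'); // npm install ws

// Connect to an assumed internal message broker (never exposed to the public internet)
const broker = new WebSocket('ws://localhost:8080');

broker.on('open', () => {
  // Subscribe to motion events; the message format is invented for this example
  broker.send(JSON.stringify({ type: 'subscribe', topic: 'sensor/motion' }));
});

broker.on('message', (raw) => {
  const event = JSON.parse(raw);
  if (event.topic === 'sensor/motion' && event.payload.room === 'hallway') {
    // React within milliseconds: publish a command for the light actuator
    broker.send(JSON.stringify({
      type: 'publish',
      topic: 'actuator/light',
      payload: { room: 'hallway', state: 'on' },
    }));
  }
});
```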
Conclusion
Long story short: it was and is tremendously fun to work on HANNAS and to build an elegant system that is reliable by design, meets the latest standards and best practices, and uses all kinds of nice web techniques. I have learned a lot from it over the past two years and want to keep developing HANNAS for a long time to come. It has also produced nice cross-pollination with my work for Tweede golf, especially when I tried out techniques that we didn't yet count as part of our stack but that have since become established.

Marlon Baeten - CTO and co-owner at Tweede golf. By working in our field with passion all day long, both privately and professionally, we have built up a lot of knowledge about developing reliable, modular systems with a long lifespan. Looking for a partner who will work for you with as much drive as Marlon? Check our "Hire a Team" page or contact us directly.

VR, HMI and HCI

Daniel

The interaction between a human and a computer, also called human-machine interaction (HMI) or human-computer interaction (HCI), has changed quite a lot in the past decades. Virtual reality (VR) and augmented reality (AR) have received revived interest due to the development of devices like the Oculus Rift and Microsoft's Hololens. Considering this, HCI will probably change even more radically in the coming years.

Short history
HCI has been a topic of active research for decades; researchers and artists have invented the most exotic technologies, for instance Char Davies' art project Osmose, in which the user navigates by breathing and moving her body.

[Image: the Osmose suit; the vest is used to measure the user's breathing]

Obviously, not every invention made it to the consumer market, but most technologies we use today were invented long before they became mainstream. There are, for instance, striking similarities between Google Glass and the EyeTap developed by Steve Mann in the 1980s and 1990s.

[Image: the EyeTap compared to Google Glass, and the development of the EyeTap since 1980]

We have come a long way since the interaction with punched cards in the early days. In the 1960s user interaction happened mostly via the command-line interface (CLI), and although the mouse was invented as early as 1964, it only became mainstream with the advent of the graphical user interface (GUI) in the early 1980s. GUIs also made it more apparent that HCI is actually two-way communication: the computer receives its input via the GUI and also gives back its output, or feedback, via the GUI.

[Image: the first mouse, as invented by Douglas Engelbart]

NUI and gestures
Speech control became consumer-ready in the 1990s (though very expensive back then). What is interesting about speech control is that it was the first appearance of a Natural User Interface (NUI). NUI roughly means that the interface is so natural that the user hardly notices it. Another example of NUI is touchscreen interaction, though we have to distinguish between using touch events as a replacement for mouse clicks, such as tapping on a button element in the GUI, and gestures, for instance a pinch gesture to scale a picture. The latter is NUI, the former is a touch-controlled GUI (a minimal pinch sketch follows below). Instead of making gestures on a touch screen, you can also perform them in the air in front of a camera or a controller such as the Leap Motion. Gestures can also be made while wearing a data glove.

[Image: a data glove]
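As a small illustration of the gesture side: a pinch-to-scale handler built on the standard touch events. The element and the styling are placeholders; a real implementation would also handle rotation, momentum and pointer events.

```js
// Minimal pinch-to-scale sketch using standard touch events (illustrative)
const img = document.getElementById('photo'); // hypothetical element
let startDistance = null; // finger distance when the pinch started
let baseScale = 1;        // scale before the current gesture
let currentScale = 1;

// Distance between the first two touch points
function distance(touches) {
  return Math.hypot(
    touches[0].clientX - touches[1].clientX,
    touches[0].clientY - touches[1].clientY
  );
}

img.addEventListener('touchstart', (e) => {
  if (e.touches.length === 2) startDistance = distance(e.touches);
});

img.addEventListener('touchmove', (e) => {
  if (e.touches.length === 2 && startDistance) {
    e.preventDefault(); // don't scroll the page while pinching
    currentScale = baseScale * (distance(e.touches) / startDistance);
    img.style.transform = `scale(${currentScale})`;
  }
});

img.addEventListener('touchend', () => {
  baseScale = currentScale; // keep the result as the new base scale
  startDistance = null;
});
```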
Interaction with brainwaves
Wearables such as smart watches are usually a mix between a remote controller and an extra monitor for a mobile device. As a remote controller you can send instructions like on a regular touchscreen, but the Apple Watch, for instance, has a classic rotary button for interaction as well. Wearables can also communicate other types of data coming passively from a human to the computer, like heart rate, skin temperature and blood oxygen, with probably a lot more to come as more types of sensors become smaller and cheaper. Google Glass is a wearable that can be controlled by voice and by brainwaves. By using a telekinetic headband that has sensors for different areas of the brain, brainwaves are turned from passive data into an actuator. Typical fields of application are medical aids for people with a handicap.

[Image: Google Glass with a telekinetic headband; the headband has 3 sensors on the skull and one that clips onto the user's ear]

AR and VR
With AR a digital overlay is superimposed on the real world, whereas with VR the real world is completely replaced by a virtual (3D) world. Google Glass and the Hololens are examples of AR devices; the Oculus Rift and Google Cardboard are examples of VR devices. Google Glass renders a small display in front of your right eye, and the position of this display in relation to your eye doesn't change if you move your head. The Hololens, on the other hand, actually 'reads' the objects in the real world and is able to render digital layers on top of these objects. If you move your head, you'll see both the real-world object and the rendered layer from a different angle.

[Image: the Hololens rendering interfaces on real-world objects]

AR is very suitable for creating a Reality User Interface (RUI), also called a Reality-Based Interface (RBI). In an RBI, real-world objects become actuators; for instance, a light switch becomes a button that can be triggered with a certain gesture. An older and more familiar example of an RBI is a 3D scene rendered on top of a marker: when you rotate the marker in the real world, the 3D scene rotates accordingly. Instead of a marker you can also use other real-world entities; for instance, Layar makes use of the GPS data of a mobile device. VR is commonly used for immersive experiences such as games, but it can also be used to experience historical or future scenes, like buildings that have been designed but haven't been built yet.

[Image: the AR Basketball App, an example of an RBI: a marker on a mug is used to control a 3D scene]

Researching VR for the web
We will be looking at two VR devices in the near future: the Oculus Rift and Google Cardboard. In the coming blog posts we will share the results with you.

Links: NUI, Wearables, Osmose, Multitouch (the video was made in 2006: note how enthusiastic the audience is about multi-touch control, which is nowadays part of our daily life), Brainwaves, First mouse, Hololens

Virtual reality and the web

Daniel

Nowadays most VR applications are native games developed with tools like Unity and Unreal. These games have to be downloaded from the regular app stores, or from app stores that have been set up by manufacturers of virtual reality headsets, like Samsung's Gear VR app store. The biggest benefit of native applications is their unbeatable performance, which is crucial for games. However, you can use VR for other purposes as well. For instance, you can add VR to panorama viewers to make them more immersive. Likewise, you could build 3D scenes that are architectural or historical recreations of buildings, which you can enter and walk around in with your VR headset. These kinds of applications are relatively easy to develop using web technologies.

[Image: panorama viewer by Emanuele Feronato]

The benefits of developing with open web technologies are obvious: you can publish your content instantly without gatekeepers (app stores), you can use your own cheap or free tools, there is a culture of collaboration in the web developers' community, and so on. Both Mozilla and Google saw the potential of VR on the web and started to develop an API that provides access to VR devices. Currently only the Oculus Rift is supported, which will probably change as soon as new devices hit the market. Mozilla and Google are working on one and the same API for WebVR, unlike what happened in the past with the development of the WebAudio API. Mozilla has also implemented WebVR in the nightly build of Firefox. It is not yet known whether Spartan, Microsoft's new browser for Windows 10, is going to support WebVR. It probably will, though, since so far Spartan has made a good showing when it comes to new browser standards.

Google also created an open-source hardware VR device, the Google Cardboard. This is a device made of cardboard that turns a mobile device into a standalone VR headset. The mobile device's gyroscope, accelerometer and magnetometer are used to track rotation and position, and the 3D content is rendered by the device itself. The Google Cardboard, combined with the WebVR API and web technologies for generating the 3D scene, makes creating VR applications achievable for a large audience.

[Image: Google Cardboard]

The WebVR API is able to detect a connected VR device, or whether the browser is running on a device that can be used as a standalone VR device, such as a mobile phone or a tablet. A single physical VR device shows up both as an HMDVRDevice object and as a PositionSensorVRDevice object; the two objects share the same hardware id, so you know they are linked. The first object contains information related to the display and the lenses, such as the resolution, the distance between the lenses and the distance from your eyes to the lenses. The latter object contains information about the position, rotation and velocity of movement of the device.

To create the 3D content you can use a myriad of JavaScript 3D libraries, but Threejs is by far the most popular and easiest to use. At Tweede golf we continually check other libraries, but so far we have stuck with Threejs. What's more, Threejs already supports VR devices: there are controls available that relay the tracking data from the sensors, and renderers that do the stereo rendering for you.
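As a rough sketch of how these pieces fit together at the time of writing: pairing the two device objects and wiring up the Threejs helpers. The method and property names (getVRDevices, hardwareUnitId, getState, VRControls, VREffect) reflect the experimental API and the Threejs example code of the moment, so treat them as a snapshot that may well change.

```js
// Pair the display and sensor objects of one physical headset (experimental WebVR API)
navigator.getVRDevices().then(function (devices) {
  var hmd = devices.filter(function (d) { return d instanceof HMDVRDevice; })[0];
  var sensor = devices.filter(function (d) {
    // the same hardware id tells us this sensor belongs to the display above
    return d instanceof PositionSensorVRDevice && d.hardwareUnitId === hmd.hardwareUnitId;
  })[0];
  console.log(hmd, sensor.getState()); // getState() holds position, orientation and velocity

  // Threejs helpers (from the examples folder) do the heavy lifting:
  // VRControls feeds the tracking data into the camera, VREffect renders the stereo image
  var scene = new THREE.Scene();
  var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
  var renderer = new THREE.WebGLRenderer();
  document.body.appendChild(renderer.domElement);

  var controls = new THREE.VRControls(camera);
  var effect = new THREE.VREffect(renderer);
  effect.setSize(window.innerWidth, window.innerHeight);

  (function animate() {
    requestAnimationFrame(animate);
    controls.update();            // apply the headset's rotation and position
    effect.render(scene, camera); // render left and right eye side by side
  })();
});
```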
Now that WebGL has landed in all browsers across all operating systems, both mobile and desktop, the biggest hurdle for rendering 3D content in a browser has been taken away. VR opens up great opportunities to change the way we experience the web. For instance, Mozilla is experimenting with rendering existing web pages with CSS3 and WebGL for VR devices. In the next blog post we will show you our first test with WebVR.

Links: The Current Status of Browser-based Virtual Reality in HTML5, a series of videos shot at the SFHTML5 meetup about VR and HTML5

The history of virtual reality

Daniel

The history of virtual reality (VR) dates back to the 1950s. Since then, a lot of - sometimes quite exotic - devices have been developed. For instance, take a look at this VR cabinet called "Sensorama", developed by Morton Heilig in 1962:

[Image: Morton Heilig's Sensorama]

Nowadays, most VR devices take the form of head-mounted devices (HMDs). Probably the best-known example of such a device is the Oculus Rift. The device looks a bit like safety goggles. Let's dive into some technical details of the Oculus Rift.

[Image: the Oculus Rift Developer Kit 2 and its positional tracker]

Displays and lenses
For each eye the Oculus has a full-HD display on which the 3D content (for instance a game or a video) is rendered. The content has to be rendered in stereo, which means that the image for the left display is taken from a slightly different angle than the image on the right display. This difference is analogous to the distance between our two eyes.

[Image: an example of an early stereo image, and the different camera positions used to shoot it]

We look at the image through a set of specially shaped lenses; these lenses distort the image in such a way that the field of view (FOV) becomes larger than the actual size of the displays in the Oculus. In the image below, the letter X (in the red box) indicates the size of the real screen, and the letter X' (X-prime) is the size of the screen you think you see because you look through the lenses:

[Image: the Oculus lenses and the perceived screen size X']

The distortion of the image caused by the lenses is called pincushion distortion and looks like this:

[Image: pincushion distortion]

To cancel out the pincushion distortion, the image is rendered with barrel distortion, which looks like this:

[Image: barrel distortion]

The net result of the pincushion distortion of the lenses and the barrel distortion of the image is that you see a straight image that is bigger than the screen size of the Oculus. As you can see in the image, a side effect of barrel distortion is that the image is stretched out towards the edges. This means that the pixel density is lower in the outer regions of the image. This is not a problem, because it is much like how our own vision works in real life: the objects we see in our peripheral vision are not as sharp as the objects we see right in front of us. As shown in the image below: the red cone is the FOV that we can really focus on, and objects in the green and blue cones are increasingly blurry.

[Image: the FOV of the human eye]

Tracking rotation, movement and position
The Oculus has sensors that track rotation and the velocity of your movements; in the device you find a gyroscope, an accelerometer and a magnetometer. Furthermore, the Oculus has 40 LEDs that are tracked by the separate positional tracker device. This device looks a bit like a webcam, and ideally you mount it on top of your computer monitor. The data coming from all sensors and trackers gets combined in a process called sensor fusion. Sensor fusion roughly means that you combine data coming from different sources to calculate data that is more accurate than the data that comes from each individual source.
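A classic, much-simplified example of sensor fusion is a complementary filter that blends a gyroscope (precise short-term, but drifting) with an accelerometer (noisy, but drift-free) into one angle estimate. The sketch below is illustrative only and certainly not the Oculus' actual algorithm, which is considerably more sophisticated.

```js
// Complementary filter: a minimal sensor fusion example (not the Oculus' actual algorithm)
const ALPHA = 0.98; // how much we trust the gyroscope's short-term accuracy

let angle = 0; // estimated pitch angle in radians

// gyroRate: angular velocity from the gyroscope (rad/s)
// accel: accelerometer reading { x, y, z } in m/s^2
// dt: time since the previous sample (s)
function fuse(gyroRate, accel, dt) {
  // Integrating the gyro is smooth and precise, but its error accumulates (drift)
  const gyroAngle = angle + gyroRate * dt;

  // Gravity gives an absolute but noisy angle estimate
  const accelAngle = Math.atan2(accel.y, accel.z);

  // Blend the two: the gyro dominates short-term, the accelerometer corrects drift long-term
  angle = ALPHA * gyroAngle + (1 - ALPHA) * accelAngle;
  return angle;
}
```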
Generating the 3D scene
The Oculus has to be connected to a computer: an HDMI cable for the displays and a USB cable for the connector box. The connector box receives a cable from the positional tracker as well as from the HMD itself. All the data from the sensors is combined to create a 3D scene that is in accordance with the position and movement of your head and body, which makes you feel like you are actually standing inside that scene.

Because the Oculus Rift blocks your view of the real world, and because you are connected to a computer like a goat tied to a pole, it is quite hard - if not dangerous - to walk around while wearing an Oculus. Therefore, other devices have been developed that transfer physical walking movements to the computer as well; see the images below. On the other hand, it is very likely that in the near future the on-board processor of the Oculus will be fast enough to render the 3D content, and thus the Oculus Rift would become a standalone device, like Microsoft's Hololens.

[Image: a VR treadmill, and another device (currently on Kickstarter) that takes it even further]

Other devices
Besides the Oculus Rift, numerous other companies have made or announced HMDs for VR. You can roughly divide them into three categories: 1) devices that have to be connected to a computer, 2) devices that work with a mobile phone, and 3) standalone devices.

The Oculus is of the first category; it needs a computer for rendering the content. On the one hand the HMD is an extra monitor for your computer, and on the other hand it is an input device that tracks your movements. In the future the connection between the HMD and the computer will probably become wireless. Google's Cardboard is an example of the second category: the phone's gyroscope, accelerometer and magnetometer are used to track rotation and position, and the 3D content is rendered by the phone itself. Microsoft's Hololens is of the third category. With the increasing power of mobile processors and co-processors for rendering and motion, we will probably see more devices of this type in the future.

The advantage of the first category is that you have more processing power for rendering the 3D content; the advantage of the second category is that you are not tied to your computer by wires and that it is a relatively cheap solution, provided that you already own a smartphone with decent processing power. The third category combines the advantages of the first two.

Links: Barrel distortion, Nvidia standalone HMD, Oculus Rift teardown