Blog

/images/waarom-taas.jpg
Gepubliceerd op

08-05-2018

Categorie

technology

Waarom een Team-as-a-Service als je je eigen software ontwikkelt?

In onze eerste blog over Team-as-a-Service (TaaS) las je dat we ons met ons 'Hire a Team'-concept met name richten op start-ups en scale-ups die tech first denken, hart hebben voor goede software en weten hoe het balletje rolt. Vaak zijn dit partijen die hun software grotendeels zelf ontwikkelen: het is immers de kern van wie ze zijn. Geen TaaS nodig, zou je denken. Toch kan het nuttig zijn om externe expertise in te roepen. We schetsen drie scenario’s: Scenario 1: legacy voorkomen Als je als start-up of scale-up afhankelijk bent van de software die je ontwikkelt, is het opbouwen van technical debt altijd een serieus risico. Met name voor start-ups is dit risico groot, als bijvoorbeeld in de hectiek op weg naar de eerste release niet alles ‘even netjes’ wordt gedaan. Maar technical debt of niet, als start-up wil je vooruit: focussen op de doorontwikkeling van je product of dienst. Heb je echter zoveel technical debt opgebouwd in een bepaald onderdeel dat het een belemmering wordt, dan kan het een goed idee zijn om een TaaS in te schakelen om dat onderdeel te vervangen. Zo blijf je zelf gefocust op doorontwikkeling en voorkom je in een vroeg stadium dat er legacy ontstaat. Scenario 2: praktische hulp bij het op orde krijgen van je development Voor zowel start-ups als scale-ups die hun eigen development doen, kan het slim zijn om een TaaS in te schakelen als het bij een van je teams niet lekker loopt. Als alternatief voor - of aanvulling op - het inhuren van bijvoorbeeld een Agile coach. Door voor een bepaalde periode een geoliede TaaS ernaast te zetten, ontlast je het team en schep je ruimte om de problemen op te lossen. Door de teams nauw te laten samenwerken of zelfs een gemengd team te smeden, kan er ook overdracht van cultuur en best practices plaatsvinden. Van developers, naar developers. Scenario 3: ontzorging met betrekking tot een side-product of side-technologie Zeker voor start-ups, maar vaak ook voor scale-ups geldt dat het slim is om je altijd beperkte development capaciteit te richten op het business-kritieke deel van je software, terwijl je een TaaS inschakelt om een secundair onderdeel te verzorgen. Stel je bent native app ontwikkelaar, die zijn geld verdient met een mooie app voor Android en een voor iOS, maar na verloop van tijd besluit je om er ook een web interface naast te zetten. Dan biedt het inschakelen van een TaaS heel veel voordelen. Zo hoef je niet à la minute extra mensen aan te trekken of je native ontwikkelaars bij te scholen in web development. Je eigen ontwikkelaars behouden hun focus en kunnen - als dat gewenst - is na verloop van tijd bijgeschoold worden in de technieken die het TaaS gebruikt. Ook dwingt het inschakelen van een TaaS je om je architectuur nog meer los te koppelen. En dat is weer goed voor de onderhoudbaarheid van je software op de lange termijn. Met ons in gesprek? Weten of een TaaS van Tweede golf in jouw situatie goed zou kunnen werken? Neem vrijblijvend contact op met Hugo of Erik. Meer weten over ons TaaS-concept, 'Hire a Team'? Klik hier.
/images/technicaldebt_730.jpg
Gepubliceerd op

12-04-2018

Categorie

technology

Technical debt? Doe de check!

Een auto met mankementen kan prima bij de garage gerepareerd worden. Dat gaat lang goed als een vakman er zorg aan besteedt en defecte onderdelen vervangt met nieuwe, oorspronkelijke onderdelen. Maar combineer een garagehouder die kiest voor het betere oplapwerk en korte termijnoplossingen, met een eigenaar die weinig geld en liefde aan zijn bak besteedt en je krijgt in de loop der jaren met serieuze aftakeling te maken. Van kinderziektes, naar wat van die vreemde geluiden, naar "eigenlijk zou ik er de snelweg niet meer mee op moeten gaan". Zo (ongeveer hè) werkt het ook met software: door overhaast te releasen, onvoldoende zorg te besteden of geen tijd te nemen voor achterstallige reparaties, bouw je schuld op en gaat je software rammelen en gaat de velocity omlaag. Doe de check! Als chief tech weet je natuurlijk best wat de risico's zijn, maar toch blijft het lastig om technical debt op tijd te signaleren. Onze praktische checklist is een supersnelle manier om te ontdekken of er misschien wel actie nodig is. Checklist: Je developers durven delen van de code niet meer aan te raken. Deze zin komt je heel bekend voor: "kunnen we geen workaround bedenken?". Bij estimates van ogenschijnlijk kleine tweaks roepen je developers: "100 uur". Meten is weten en harde cijfers zeggen alles. Dus natuurlijk heb je een tech-debt-tool. Heb je die niet? Je vraagt je developers of ze een rewrite een goed idee vinden. Je krijgt een staande ovatie. Je business collega's vinden de developers extreem inflexibel. Heb je bij drie of meer punten een vinkje gezet? Dan is het misschien wel tijd voor die grote beurt of is het in ieder geval een goed idee een check-up te doen. Start bijvoorbeeld hier thinkapps.com/blog/development/technical-debt-definition-importance/ of hier kellysutton.com/2017/10/24/quantifying-technical-debt.html of hier engineering.riotgames.com/news/taxonomy-tech-debt. Tweede golf verhuurt compacte web development teams. Meer weten? Check onze 'Hire A Team' pagina.
/images/blog-react-server-side-rendering.png
Gepubliceerd op

06-03-2018

Categorie

development

Server-side rendering voor React web apps

In de begindagen van het internet was server-side rendering van HTML-pagina’s de enige optie. De wereld was eenvoudig: bij iedere klik op een link werd een compleet nieuwe pagina opgehaald van de server en getoond in de browser. Naarmate de kracht van javascript toenam, ontstond de mogelijkheid om pagina's ook (al dan niet gedeeltelijk) in de browser te renderen. Door de voordelen van client-side rendering (zie onder) en het feit dat webpagina's steeds meer volledige, interactieve applicaties zijn geworden, zijn er in de afgelopen jaren frameworks ontstaan die client-side rendering makkelijk en efficiënt maken, zoals React, Angular en Vue. Het grote nadeel van client-side rendering is dat de content minder makkelijk gevonden wordt door zoekmachines. Daar staat tegenover dat zoekmachines zich hebben aangepast aan het feit dat er steeds meer CSR-sites zijn bijgekomen. Sommige zoekmachines voeren bijvoorbeeld de javascript uit op pagina's van sites die veel bezoekers hebben, en de Google crawler indexeert tegenwoordig tot op zekere hoogte React-componenten (zie de links onderaan de pagina's). Voordat we ingaan op de manier waarop wij server-side rendering voor React web apps gebruiken, eerst nog eens de voor- en nadelen van server- en client-side rendering op een rijtje: Server-side rendering (ssr) Voordeel: pagina's zijn indexeerbaar voor zoekmachines Voordeel: snelle laadtijd eerste pagina Nadeel: veel contact (en dataverkeer) met de server en daardoor trager, want bij ieder request wordt de hele pagina opgehaald Nadeel: minder controle over de transities tussen pagina’s, zoals animaties Client-side rendering (csr) Voordeel: na de eerste pagina laden de daarop volgende pagina's snel Voordeel: minder server verkeer Voordeel: paginaovergangen kunnen geanimeerd worden Voordeel: pagina's kunnen gedeeltelijk gere-rendered worden (bijvoorbeeld: er wordt een inlogformulier aan de pagina toegevoegd) Nadeel: pagina’s zijn niet ‘out of the box’ indexeerbaar door zoekmachines Nadeel: renderen van de eerste pagina duurt langer omdat eerst alle javascript ingeladen moet worden Best of both worlds Bij Tweede golf bouwen we vaak React webapplicaties waarvoor met name indexeerbaarheid een must is en daarom ssr noodzakelijk. We passen dan de volgende, conceptueel eenvoudige, combinatie van beide render methodes toe: de eerste pagina van de site of applicatie wordt server-side gerenderd en alle volgende pagina's client-side. Omdat iedere pagina van een site of applicatie de eerste pagina kan zijn worden alle pagina's dus door zoekmachines geïndexeerd. Een bestaande React-app kan heel eenvoudig omgebouwd worden naar ssr door gebruik te maken van een speciale methode genaamd renderToString. Hiermee wordt het root component van een React-app (of component) omgezet naar een kant-en-klare HTML string die je vervolgens in een HTML-pagina kunt plakken en door een webserver geserveerd kan worden. React renderen op de server Omdat React een javascript module is heeft de bovengenoemde methode renderToString, een javascript runtime op de server nodig. Alhoewel er libraries zijn waarmee je met een extensie React kunt renderen met php is deze methode niet aan te raden omdat deze libraries vaak traag zijn, nog experimenteel zijn of niet meer worden onderhouden. Wij gebruiken daarom Nodejs met een http-server zoals Express of Koa. Deze server draait via een proxy achter de webserver. 
Theoretisch zou je ook de Nodejs-server als public-facing webserver kunnen gebruiken, maar wij kiezen liever een volwassen webserver zoals nginx die uitgebreide configuratiemogelijkheden heeft voor https, compressie en caching. Daarnaast is het zo dat nginx veel sneller is in het serveren van statische assets zoals plaatjes, stylesheets, fonts en javascripts. De Nodejs-server serveert dus alleen een HTML-pagina met daarin de React-app en de referenties naar de statische assets die zoals gezegd door nginx geserveerd worden. Als React op de client gerenderd wordt, krijgt de app een HTML-element op een pagina toegewezen, waarbinnen React de DOM-tree kan manipuleren. Deze HTML-pagina kan een statische pagina zijn of een bijvoorbeeld door PHP gegenereerde dynamische pagina. Bij server-side rendering renderen we zowel de React app als de HTML-pagina; op deze manier kunnen we ook in de HTML dynamische data schrijven zoals metatags die uit de database komen. State Rehydration op de client Doordat de pagina op de server gerenderd is, is het feitelijk een statische pagina geworden. Om de navolgende pagina's weer op de client te kunnen renderen moeten we de state rehydration uitvoeren. De javascript code die dit doet zetten we helemaal onderaan in de pagina net voor de closing body tag; hierdoor zie je eerst de hele pagina, vervolgens wordt de javascript ingeladen en ten slotte voeren we de state rehydration uit. Rehydration is het proces waarbij je de client-side state afleidt (extraheert) uit de server-side gerenderde markup. Als je dit goed implementeert, triggert het hydrateren van de state geen nieuwe client-side render cycle. Tijdens het rehydrateren voegt React onder andere eventlisteners toe. Als Redux of een andere state management library wordt gebruikt is het nodig om de initiële state door te geven aan de javascript runtime, bijvoorbeeld via een globale variabele. Meer lezen Client-side rendering vs. server-side rendering. Nieuwe features server-side rendering in React 16. Wel of geen server-side rendering gebruiken? Simpel voorbeeld van ssr met React (N.B. dit voorbeeld is met React 15). Is ssr noodzakelijk voor SEO? SEO en React sites.
/images/team2_730.jpg
Gepubliceerd op

14-02-2018

Categorie

technology

Waarom Team-as-a-Service zo populair is

Het is al zeven jaar geleden dat Marc Andreessen onderwees "Why software is eating the world". Als het toen al niet duidelijk was, dan is het dat nu wel: softwarereuzen knabbelen voortdurend aan traditionele branches en soms gaat het met een minder subtiel, hap-slik-weg. Als het niet Google, Facebook of Apple is, dan zijn het wel "kleine" broertjes zoals Netflix, Airbnb, Spotify of Uber. En als die het niet zijn, dan staan er nog tienduizenden tech start-ups klaar om de wereld beter te maken met een nieuw stukje software. "Every company needs to become a software company" Nou wordt - gelukkig - niet de wereld opgegeten, maar alleen die bedrijven die zich niet aanpassen aan de veranderde situatie. In de jaren na Andreessens publicatie werd de wat dreigend klinkende titel al snel omgevormd tot een praktisch advies: "Every company needs to become a software company". Klinkt spannend, maar wat betekent dat voor jouw bedrijf of organisatie? Hoe doe je dat? "Every company needs to become a software company". Oke, check. Dan doen we dat toch? Een kwestie van de juiste expertise binnenhalen en een goed team samenstellen. Uhm, nou, dat is niet zo makkelijk. Stel, je laat even buiten beschouwing dat software development een vak apart is, en dat je op z'n minst de kennis en ervaring moet hebben om dat goed te managen, dan loop je nog direct tegen het meest acute probleem aan: goede developers zijn nauwelijks te vinden. Wat te doen? Je kunt bij een detacheerder aankloppen, maar die doet wat hij of zij moet doen en loopt na afloop met de opgedane kennis weer de deur uit. Nog meer tijd en geld investeren in recruitment is een andere mogelijkheid. Maar in deze krappe markt is het de vraag wanneer je je team compleet hebt en je velocity op peil is. Team-as-a-Service We hebben steeds meer software nodig, developers zijn moeilijk te vinden en dan is ook nog de gewenste time-to-market (altijd...?) kort. Niet op te lossen? Daar biedt Team-as-a-service (TaaS) uitkomst: neem een stevige short-cut en huur direct een compleet team in. Je haalt alle capaciteit, competenties en ervaring in één keer in huis. Van UX-er tot hardcore developer. De groeiende aantrekkingskracht van TaaS zie je ook terug in het aantal aanbieders. In twee jaar tijd is dat in Nederland van een enkeling naar tientallen gegaan. Zes voordelen van TaaS Wij signaleren de volgende redenen waarom TaaS zo aantrekkelijk is: Het speelt in op de "as a service"-trend, het ontzorgt, je hebt altijd je development capaciteit beschikbaar Je raakt niet verstrikt in de werving en selectie of recruitment madness van de huidige arbeidsmarkt Je leverancier - niet jijzelf - zorgt voor de cultuur en randvoorwaarden die voor developers aantrekkelijk zijn, zodat het verloop klein is Development kent vele specialisaties; je kiest wat je nodig hebt en haalt in één keer de hele stack in huis Een team biedt meer continuïteit dan een verzameling freelancers of gedetacheerden Je leverancier is voortdurend bezig met innovatie om zijn teams cutting-edge te laten zijn en blijven, want dat is hun brood, ook een zorg minder voor jou Niet alleen maar voordelen Natuurlijk heeft TaaS ook nadelen. Hoe zorg je er bijvoorbeeld voor dat je grip houdt op een ingehuurd team? De meeste TaaS-aanbieders hanteren een vorm van scrum en bieden je dus grip door je eigen product owner op het team te zetten. De PO bewaakt toegevoegde waarde en stelt prioriteiten, scrum biedt een helder proces en duidelijke rollen. Een andere prangende vraag is vaak: hoe hou ik de kennis binnen mijn bedrijf? 
Aan die vraag - en andere aspecten van Team-as-a-Service - besteden we in een volgend artikel aandacht. Hire a team Bij Tweede golf ontwikkelden we onze eigen Team-as-a-Service-variant, die we 'Hire a team' doopten. We hebben een specifieke doelgroep: start-ups en scale-ups die tech-first denken, die hart hebben voor goede software en weten hoe het balletje rolt. Ook daarover vertellen we later meer. Nu al meer weten? Neem contact met ons op of check onze 'Hire A Team' pagina.
/images/product_owner_730.jpg
Gepubliceerd op

15-01-2018

Categorie

technology

De ideale product owner: 5 tips

Bij Tweede golf werken onze development teams altijd nauw samen met een product owner van de klant. Hoe beter de samenwerking, hoe beter het resultaat. Waar zou de ideale product owner volgens ons aan moeten voldoen? Hieronder vijf tips voor de ideale PO m/v. 1. De PO is de koning(in) van de toegevoegde waarde De product owner 'owns value'. Hij of zij kan als geen ander inschatten wat de waarde is van een feature voor zijn of haar stakeholders. En wat waarde creëert op de lange termijn. Daarbij hoort ook goed kunnen prioriteren en nee kunnen zeggen tegen zaken die niet als waardevol worden beschouwd. 2. De PO kent zijn pappenheimers Een goede PO kent zijn of haar organisatie als de beste. Hij of zij voelt verwachtingen en mogelijke zorgen van de stakeholders goed aan en is in staat dit te managen en breed draagvlak te creëren. Door stakeholders te betrekken, goed te luisteren en bruggen te slaan waar nodig. 3. De PO houdt zich staande... ... terwijl er aan alle kanten aan hem of haar wordt getrokken. Het is een stevige persoonlijkheid die goed functioneert tussen stakeholders enerzijds en het development team anderzijds. Want laten we wel zijn: stakeholders staan soms op hun strepen en developers zijn vaak eigenwijs. 4. De PO is de aanvoerder van het team Uiteindelijk neemt de product owner de belangrijke beslissingen, maar als aanvoerder van het development team betrekt hij of zij daar altijd de teamleden bij. Na verloop van tijd raken developers en PO steeds beter op elkaar ingespeeld. Termen als ‘aan een half woord genoeg hebben’ en ‘het moet zijn alsof hij hier werkt’ horen daarbij. 5. De PO vindt software maken leuk Tuurlijk, het gaat de PO in de eerste plaats om het uiteindelijke product. Maar het maken zelf en het software proces boeit hem of haar ook. Nieuwe dingen leren en begrijpen wat de software beter maakt. Ruwe bolster met een klein nerd-hartje. Voldoe je hieraan, dan ben je onze droom PO. Nieuwsgierig geworden naar of we al eens met onze ideale product owner gewerkt hebben? Of wil je meer weten over het inhuren van een development team van Tweede golf? Check dan tweedegolf.nl/hire-a-team. Met ons concept ‘Hire a Team’ staan we direct klaar om aan de slag te gaan. Informeer eens naar onze mogelijkheden en daag ons uit!
/images/jsaruco2.jpg
Gepubliceerd op

24-05-2016

Categorie

development

Augmented Reality with web technologies

With the start of the implementation of the WebRTC API around 2012, javascript developers gained access to the video and audio streams coming from webcams and microphones. This paved the way for augmented reality (AR) applications that were build solely with web technologies. Using webtechnologies for AR is also called "the augmented web". Most AR applications use a webcam feed that gets analyzed for certain patterns, for instance a simple color field, a movement or a marker. Nowadays there are several other ways to augment the reality of a webpage, for instance using geolocation or devices such as the Leapmotion. In this post I will focus on AR with markers. Two libraries While some developers have created their own AR libraries, most developers use either JSAruco or JSARToolkit. Both libraries are javascript ports from ancient C++ libraries. JSAruco is based on OpenCV and JSARToolkit is a port of ARToolkit via the in-between ports NyARToolkit (Java) and FLARToolkit (Actionscript). The inner-workings of the libraries is as follows: a snapshot of the video feed is taken by copying image data from the video to a canvas on every animation frame. This image data gets analyzed for markers, and the position and rotation of every detected marker is returned. This information can subsequently be used to render a 3D object on top of the video feed, thus augmenting the reality. Three.js works very well with both JSAruco and JSARToolkit and I made 2 simple examples that show you how to use the libraries with Three.js, the code and some markers are available at Github. 3D model rendered on a marker Markers A marker is usually a grid of small black and white squares and there are certain rules for how these black and white squares must be patched together. Diogok has put code on github that generates all possible JSAruco markers. Note that JSAruco markers can not be used with JSARToolkit; you have to use the markers that you can find in the repository on Github, in the markers folder. Both libraries support multiple markers, which means that each distinct marker gets its own id, and this id can be used to couple a model (or an action) to a specific marker. For instance I made this small test using multiple markers: Conditions In the process of analyzing, the image is turned into an inverted plain black and white image. This means that a pixel is either white or black and this makes it very easy to detect a marker. For the best results, good bright lighting is mandatory. Also putting the marker on a surface with a plain color is recommended. If possible, using backlight is ideal. In general you should turn off the auto focus of your webcam. Marker detection Performance In JSARToolkit you can set the size of the image data that will be processed: parsing smaller images is faster but on the other hand smaller images have less detail. Besides tweaking the size of the image being processed, you can set the threshold, which is the color value that classifies whether a pixel will become white or black. In JSAruco the size of the image data has to match the size of the canvas that you use to render the 3D scene (in our case: where we render the Three.js scene). I have noticed that if the width of the canvas is more than about 700 pixels, JSAruco starts to have difficulties detecting markers, and the wider the canvas, the more severe this problem becomes. 
In general JSARToolkit performs better than JSAruco, but both libraries suffer from missed or wrongly positioned markers, resulting in an unsteady presentation. You can compare both libraries yourself using the simple test applications that I mentioned earlier. Code is at Github. Web or native On iOS you don't have access to the streams coming from a camera or a microphone due to restrictions put in place by Apple. So on this platform it is impossible to create an AR application with only web technologies. Since mobile devices have become ubiquitous, you see an increasing number of native AR libraries for mobile platforms appear on Github, especially for iOS. The benefits of native are twofold: better performance and control over all camera settings (contrast, brightness, auto focus, etc.). Better performance means faster and more accurate marker detection and control over camera settings provide tools for optimizing the incoming feed. Moreover you can use the light sensor of your device to detect the light intensity and adjust the camera settings accordingly. Currently you can't use the light sensor API on iOS, but on Android, Ubuntu, FirefoxOS and Windows you can. Conclusion Technically you can build an AR application using web technologies but the performance isn't as good as native AR apps such as Layar and Roomle. For some applications web technologies might suffice, for instance art installations or applications that just want to show the possibilities of AR. The advantage of using web technologies is obvious: it is much simpler to set up an application and it runs on any platform (iOS being the sad exception). The lesser performance is partly because analyzing the image data is done in the main javascript thread, and partly because the lack of control over the camera settings which leads to a poor quality of the incoming feed, for instance due to bad or fluctuating light conditions. On the short term using webworkers may improve the analyzing and detection step, and on the longer term the ever improving performance of browsers will eventually lead to a more reliable marker detection. Furthermore Web API's keep evolving so in the near future we might get more control over the camera settings via javascript. The draft version of the MediaCapture API already shows some useful future capabilities. Also there is a draft Web API for the light sensor that is currently only implemented in Firefox. The future of the augmented web looks bright.
/images/minecraft.jpg
Gepubliceerd op

16-02-2016

Categorie

development

React and Three.js

In the autumn of 2015, we got to know the popular javascript library React very well, when we used it to create a fun quiz app. Soon the idea arose to research the usage of React in combination with Three.js, the leading javascript library for 3D. We've been using Three.js for some years now in our projects and we expected that using React could improve code quality in 3D projects a lot. Currently, there are two libaries that provide React bindings for Three.js. This post will explore their differences using working examples. We hope it will help you to make up your mind which one to choose. React React has become a popular choice for creating user interfaces. React keeps a virtual DOM and changes in the UI are applied to this virtual DOM first. Then React calculates the minimal set of changes that are needed to update the real DOM to match with the virtual DOM. This process is called reconciliation. Because DOM operations are expensive, the performance benefit of React is substantial. But there is more to React than the performance impact. Especially in combination with Flux, JSX and the debug tools for the browser it is a very powerful and yet easy to use library to create complex UI's with reusable components. Where React ultimately creates html that is rendered by the browser, there is an increasing number of libraries that provide React bindings for libraries that render to the canvas element such as D3.js, Flipboard and Chart.js. There are also bindings for SVG and another interesting experiment is gl-react. React and Three.js For Three.js there are two libraries that provide React bindings: react-three react-three-renderer Three.js keeps a virtual 3D scene in memory which is rendered to the WebGL context of the canvas element every time you call the render method. The render method completely clears the canvas and creates the complete scene anew, even when nothing has changed. Therefor we have nothing to gain performance-wise when using React with Three.js, but there is still plenty reason to use it. React encourages you to create components and move state out of components as much as possible, resulting in cleaner, better to maintain code, and the JSX notation gives you a very clear overview of the hierarchical structure of the components in your 3D scene as we will see in the code examples in the next chapter. Two libraries compared React-three is written in es5, react-three-renderer is newer and written in es6. The following code examples, that both create a simple cube, show us the differences between the libraries. First react-three: import React3 from 'react-three'; let Scene = React3.Scene let Camera = React3.Camera; let AmbientLight = React3.AmbientLight; let Mesh = React3.Mesh; /> And now the same in react-three-renderer: import Scene from 'react-three-renderer' /> /> We see two obvious differences: 1) In react-three we import one object and this object contains all available components. I have given the components the same name as the properties of the imported object, but I could have used any name. The naming convention in React commands us to write custom components starting with an uppercase, which I obied willingly. In react-three-renderer we import one component and the available components are known within this component/tag. This is because react-three-renderer uses internal components, similar to div, span and so on. Note that the names of the components start with lowercases. 
2) In react-three the properties geometry and material of the Mesh component are instances of the corresponding Three.js classes whereas in react-three-renderer both the geometry and the material are components as well. React-three has only 17 components, but react-three-renderer strives to create components for every (relevant) Three.js class, thus gaining a higher granularity. Creating components The following example is a Minecraft character configurator that we can use to change the sizes of all the cubes that the character consists of. Screenshot of the Minecraft character configurator It shows you how easy it is to create 3D components with both libraries and how your code benefits from using React both in terms of being well-organised and maintainable. All code is available at github and you can find the live examples here. The code of the main component looks as follows: First we create a section that contains all controls, then we create the scenegraph containing a plane (World) on which the Minecraft character gets placed. As you can see all code specific to the Minecraft character is tucked away in its own component, leaving the hierarchal structure very clear despite its complexity. When we take a look at the code of the Minecraft character component we see how much complexity is actually abstracted away: Here we see a component named Box which is some wrapper code around a cube. By using this component we not only reduce the amount of code in the Minecraft character module, we also abstract away differences between the 2 libraries. This means that we can use the Minecraft character component both in projects that use react-three and in projects that use react-three-renderer. To see the different implementations of the Box component please take a look at the code on github: react-three and react-three-renderer. Importing models The model loaders for Three.js load the various 3D formats (Collada, FBX, Obj, JSON, and so on) and parse them into Three.js objects that can be added to the scene right away. This is very convenient when you use Three.js without React bindings, but it requires an extra conversion step when we do use React bindings because we need to parse the Three.js object into components. I have written some utility code for this which is available at github. You can find two working examples of how to use this code with both libraries in a separate repository at github. The utility is a parser and a loader in one and this is how you use it: let parsedModel = new ParsedModel(); parsedModel.load('path/to/model.json'); After the model is loaded it is parsed right-away. During the parsing step a map containing all geometries is generated. All these geometries are merged into one single large geometry as well and for this merged geometry a multi-material is created. Now we can use it in a React component, in react-three like so: In react-three-renderer we need more code, on the one hand because multi-materials are not (yet) supported so we can not use the merged geometry, and on the other hand because of its higher granularity: let meshes = []; parsedModel.geometries.forEach((geometry, uuid) => { // get the right material for this geometry using the material index let material = parsedModel.materialArray[materialIndices.get(uuid)]; meshes.push( {createMaterial(material)} ); }) {meshes} The createMaterial method parses a Three.js material into a react-three-renderer component, see this code at github. 
Pros and cons Using React-bindings for Three.js results in very clean code. Usually you don't have a hierarchical overview of your 3D scene, but with React your scene is clearly laid out in a tree of components. As as bonus, you can debug your scene with the React browser tools. As we have seen in the Minecraft character configurator, using React is very efficient for applications that use composite components, and we have seen how smoothly React GUI controls can be connected to a 3D scene. In applications with a flat structure, for instance when you have a lot of 3D objects placed on the scene, the JSX code of your scenegraph becomes merely a long list which might be as hard to understand as the original Three.js representation of the scenegraph. However, with React you can split up such a long list in a breeze, for example by categorizing the 3D objects: Sometimes using React requires some extra steps, for instance when loading 3D models, and sometimes it might take a bit time to find the right way of implementing common Three.js functionality like for instance user controls or calling Three.js' own render method manually. To elaborate on the latter example: by default both react-three and react-three-renderer call Three.js' render function continuously by passing it to Window.requestAnimationFrame(). While this is a good choice for 3D games and animations, it is might be overkill in applications that have a more static scene like applications that simply show 3D models, or our Minecraft character configurator. In both libraries it is possible to turn off automatic rendering by setting a parameter on the scenegraph component, as you can see in the code of the Minecraft character configurator. Conclusion For the types of project that I have discussed above I would definitely recommend using React bindings for Three.js. Not only your code will be better set up and thus better maintainable, it will also speed up your work significantly once you have acquainted yourself with the workflow of React as well. Whether you should use react-three or react-three-renderer depends on your project. Both libraries are relatively new but as you can see on Github the code gets updated on a weekly basis, and moreover there are lively discussions going on in the issue trackers and issues and suggestions are quite swiftly picked up. Some final remarks that can help you make up your mind: react-three depends on Three.js r72 React version 0.14.2, react-three-renderer works with the most recent versions of both Three.js and React. react-three-renderer has not yet implemented all Three.js features, react-three does (mainly because its lesser granularity). in react-three the ray caster doesn't work i.c.w. controls like the OrbitControls, in react-three-renderer it does. both libraries provide excellent examples, studying these will give you a good grasp of the basic principles. Don't hesitate to get in touch with us, if you have any questions or remarks about this post. Feedback is much appreciated.
/assets/img/blog/physics-engine.jpg
Gepubliceerd op

04-09-2015

Categorie

development

Some fun with physics in Three.js

We all want our 3D visualisations to be as real as possible. A basic premise seems to be that they adhere to the laws of physics. No small feat! Or is it? We decided to give it a go during a two-day programming contest. Our team's idea was to develop a web-based game where the user cycles around and has to avoid crashing into cars. To create the game, we needed a physics engine. As could be expected, 48 hours later we found out we had been overly ambitious. There really was no game experience to speak of. However, we did manage to create a 3D world with basic physics. Hopefully, our experiences will give you some insight into using physics in 3D. Image taken from our simplified 3d representation of the city of Nijmegen Making a game with Physijs Our goal was to make a small game in which you cycle through Nijmegen and have to avoid being run over by cars. We first looked into some frameworks that offer physics simulation in the browser. We found [Physijs] which integrates with [Three.js] and uses [ammo.js], which is a javascript port of [bullet]. Physijs runs all the physics simulation in a web worker. The worker makes sure that it does not interfere with the renderloop. To set up Physijs you need to point it to its web worker and the ammo script that it requires. The next step is to create a Physijs.Scene and call the simulate function of that scene in your update function. After having done the basic setup, we needed to add physics objects. To do this we needed to create Physijs meshes instead of the default Three.Meshes. There are a couple of Physijs meshes but the most useful, lightweight meshes are Physijs.BoxMesh, which uses the bounding box as physics object, and Physijs.SphereMesh, which uses the bounding sphere. Check out [demo of Physijs] to get a feel of what is possible. For more demos check out [Physijs] Crashing cars For our little game prototype we used Physijs.BoxMesh for the cars, the surfaces and the bicycle. For the buildings we used the Physisjs.ConcaveMesh, because multiple buildings needed to be grouped together in chunks and this would cause collisions otherwise. We must note Physisjs.ConcaveMesh is a very costly physics object to have in your simulation, but luckily the performance turned out to be reasonable. In our final demo we managed to get cars moving as physics objects, following the roads that we had extracted from the government database [TOP10NL] (in Dutch). Moving the bicycle proved to be harder, because it is not a physics object. When we integrated the bike into the physics we ran into errors concerning the web worker that simulates the physics. The main difference between the bike and the cars is that the bike should have used the 'applyCentralImpulse' function which unfortunately could not be found in the simulation and the cars used 'setLinearVelocity', which did work. During the final hour of our hackathon we still weren't able to get the bike working. The result is the hilarious demo in which you can see cars mindlessly drive around, crash into each other and then bounce out of the 'world' again. Room for improvement As you can see in the movie the cars behave a bit strangely when tipped over. This is because the Physijs.BoxMesh allows cars to slide on their front as the physics mesh is flat at that location. Also note that not all roads match up perfectly, because of inaccuracies in the data set. 
Even though the demo mainly looks funny in this state, it does show correct car/house, car/car, and car/surface interaction as far as physics is concerned. It just requires more tweaking to make the cars behave in a more realistic way. Given an extra day or two, we surely would've succeeded in making a mind-blowing game! [demo of Physijs]: http://chandlerprall.github.io/Physijs/examples/vehicle.html/ [Physijs]: http://chandlerprall.github.io/Physijs/ [Three.js]: http://threejs.org/ [ammo.js]: https://github.com/kripken/ammo.js/ [bullet]: http://bulletphysics.org/wordpress/
/assets/img/blog/yga-beregening2-5.png
Gepubliceerd op

14-08-2015

Categorie

Tech

Adding artificial intelligence to 3D design

In 2014 we won an innovation grant from the province of Gelderland based on our proposal to provide 'intelligent' gardening advice to users of Draw Your Garden (Dutch: Teken Je Tuin). We created this web application for one of our clients and we have been gradually expanding it since its release. In the app users can both design their garden and view it in 3D as well as order products and contact gardeners, who in turn can submit proposals based on the users' design. In this article we will give you an overview of how we created methods that allow users to design their gardens intelligently. We think these methods could translate nicely to domains outside gardening. We are curious if you feel the same after reading this article. Draw Your Garden / Teken je Tuin Partnered with our client we proposed to develop a number of prototypes aimed at assisting users in several areas which hitherto required extensive expert knowledge. Appropriately, we named this project 'Your GardenAssistant'. For the project we focused on three areas of garden design: Advice on watering a garden; where to place different kinds of sprinklers, what kind of plants need more water, which types of sprinklers are suited to different parts of the garden and how sprinklers should be connected to what type of water source. The presence and influence of sunlight and shadow in the garden; to show which areas of the garden receive most light in different seasons and parts of the day, where to locate a terrace and where to plant different sorts of plant. Advice on the properties of plants and trees; which plants would prosper in different parts of the garden, how many plants flower this month, what plants are edible or poisonous. Domain knowledge and user feedback The first thing we concluded was that it would be crucial to involve different kinds of experts, gardeners and users in the project. We realized we needed to include them from the very start of the project, and to check back with them regularly to decide how to proceed. 3D design The preliminary task was to create a suitable 3D design & drawing application based on our 3D Framework. Our approach is to use a full 3D environment with an top-down view for drawing. This view, a kind of orthographic projection, provides the advantages of 'flat', easy-to-use interaction during drawing & design, while the full 3D experience is just one camera shift away. Drawing in full 3D with a top-down perspective. The same garden, but rotated slightly Watering To be able to give advice about watering a garden we created a 'provided graph' and a 'moisture graph', to store how much water the sprinklers provided, and how much water different parts of the garden required. We used these to create a wizard designed to help the user choose both the right type of sprinker and a water source, and to connect these two, all in a few easy steps. Provided graph: how much water is provided. Note that the sprinkler on the bottom-right is not connected to a water source. Moisture graph: green areas receive enough water, red areas too little or possibly too much (if they do not require water, like the terrace). Sunlight and shadows The first thing here was to create a 'heat map' to represent the areas of the garden with the most sun exposure. To do this, we created a realistic simulation of the sun in the 3D representation of the garden. 
The result of the 3D simulation, the 'heat map', is used later on when the user adds plants to the garden design: we can quickly look up the amount of sunlight at the precise spot the plant is placed. Apart from the heat map we also created a nice user-friendly interface which allows users to instantly see which parts of the garden receive the most sun in a selected season and hour of the day. 3D sunlight simulation Side note: during the project we also looked into artificial light sources. If you are a developer you might be interested in our findings about Point light shadows in Three.js. Three.js is the javascript 3D library most widely used to create 3D web apps with WebGL. Plant advice Giving intelligent advice about which plants to use in your garden, and where best to place them, was arguably the most challenging part of the project. Especially since unwanted advice can very easily irritate a user a great deal. Based on feedback from our focus groups and surveys we decided to adopt an approach inspired by knowledge-based systems. We created a modular system of multiple 'Advisors', all of which provide advice or warnings based on a simple rule. For instance, we created a GrowthAdvisor based on the simple rule that fast-growing plants should not be placed too close together. This way we could easily create many more advisors based on all sorts of simple rules. An early mockup of the advice UI In addition to our conceptual and technical efforts we also put a lot of thought into UI approaches, i.e. how to best present the most relevant advice. We hope to come back to this in a future blog. Conclusions Creating a design application is not an easy task. Creating a design application in which users receive meaningful advice during the design process is, well, a major challenge. The grant gave us the opportunity to seriously engage these challenges. Looking back we are very happy with the results. The translation from data (i.e. plant properties as well as data generated from simulations) to guidance, by means of advisors, turned out to be both feasible and elegant. Looking at other domains, we feel this approach is applicable to design apps and product configurators in a great variety of fields. Don't hesitate to contact us if you want to explore the possibilities. We look forward to working on a challenging project like this one in the future. Thanks We would like to thank all the participants of this project, in particular: The province of Gelderland, for making the project possible Teken je tuin, our partner for this project All participants from the focus groups and test groups The users of Draw Your Garden for providing us with useful feedback
/assets/img/blog/teapots-json-collada.jpg
Gepubliceerd op

11-08-2015

Categorie

tools

Three.js Collada to JSON converter

The Collada format is the most commonly used format for 3D models in Three.js. However, the Collada format is an interchange format, not a delivery format. Interchange vs. delivery Where a delivery format should be as small as possible and optimized for parsing by the receiving end, an interchange format doesn't have such requirements, it should just make the exchange of models between 3D authoring tools painless. Because Collada is XML it is rather verbose. And to parse a Collada, Three.js has to loop over every node of the tree and convert it to a Three.js 3D object. Three.js' JSON format For improved delivery we first looked at glTF. Unfortunately it wasn't without flaws in our implementations. Next we decided to try Three.js' own JSON format for delivery. JSON is less verbose and because it is Three.js' own format, parsing is done in a breeze. After some fruitless experiments with Maya's Three.js JSON exporter and some existing Collada to JSON converters, we tried our luck with Three.js' built in toJSON() method. Every 3D object inherits the toJSON() method from the class Object3D, so you can convert a loaded Collada model to JSON and then save it to disk. We wanted to wrap this idea into a Nodejs app but the ColladaLoader for Three.js depends on the DOMParser, and there is not yet an adequate equivalent for this in Nodejs. Three.js JSON converter So we made an online converter. There are 2 versions; a preview version that shows the model as Collada and as JSON, and a 'headless' version that just converts the Collada. The first version is suitable if you want to convert only a few models and check the models side by side for possible conversion errors, a Collada to JSON preview. If you want to convert a large number of Colladas you'd better use the second version, a headless Collada to JSON headless. All great teapots are alike How it works First the Collada gets parsed by the DOMParser to search for textures. This is necessary because Three.js' toJSON() method does not include textures in the resulting JSON object. We add the images of all found textures to the THREE.Cache object. By doing so we suppress error messages generated by the Collada loader. Then we use the parse() method of the ColladaLoader to parse the Collada model into a Three.js Group, and because a Group inherits from Object3D we can convert it to JSON right away. The last step is to add the texture images to the JSON file and save the result as a Blob using URL.createObjectURL. All done! Code and links Collada to JSON converter preview version Code on Github preview version Collada to JSON converter headless version Code on Github headless version Boris Ignjatovic, our preferred 3D artist. Thanks for helping us find the best workflow!
/assets/img/blog/light2-shader.png
Gepubliceerd op

02-08-2015

Categorie

Tech

Point Light Shadows In Three.js, part II

While working on [a 3D project] that involved garden lights we stumbled upon unexpected problems with shadows cast from point lights. The shadows were either not there at all or they were just all over the place. It seemed impossible to create a decent light/shadow world (garden in our case). After recovering from the initial shock and disappointment we started investigating the problem. This should be doable. As it turns out, it sort of is. Read on. In the [previous blog post] about this topic we talked about possible ways to implement an efficient way of calculating point light shadows in [Three.js], the javascript 3D library. 'Efficient' meaning: taking fewer texture samplers than the naive approach which takes six samplers for each point light, one for each of the principal directions. We came up with three potential solutions: Divide a larger texture into smaller viewports and draw the depth map to each of these viewports Render each of the depth maps to one side of a cube texture map Use dual-hyperboloid shadow mapping We will first discuss the three approaches and finish up with the results of the most successful one. Possible solutions The first approach - dividing a larger texture into smaller viewports - proved to be difficult to integrate with the current shader and uniform variables. The GLSL shader language requires all of the for loops to be unrollable, as the shaders can then be divided into small chunks of work for each of the stream processors of the GPU. When handling an assortment of textures, some of which have this subdivision of smaller textures, it is a pretty complex task to write it in such a way that it is unrollable without making two separate loops for the two types of textures. Which in turn makes maintaining the shader code quite a hassle. Image from the internal rendering in Three.js. The grid is added to clarify what happens. The grid locations correspond with +x, −x, +y, −y, +z and −z axis, going from top left to bottom right. The second approach, the cube texture, was scrapped halfway through development. While the solution seemed obvious and also is used in the industry, it was very hard to debug. Both Firefox' native canvas debugger and Chrome's WebGL debugger, called the [WebGL Inspector], did not render the cube texture properly. We could observe the switching of framebuffers (which are like internal screens to draw on) but they stayed blank while the draw calls proceeded. This means Three.js did not cull them and they should have shown up on the framebuffer. With no way to debug this step and no output it would be ill-advised to continue to develop this method. Image taken from [devmaster.net] explaining shadow mapping using cube maps. The final approach is the dual-paraboloid shadow mapping. This approach takes two textures per point light. The [previous blog post] talked about one, but this proved to be incorrect. This fact would make it less ideal than the other two approaches. On top of that, the implementation is rather complex. If we had complete control over the OpenGL code this could be a solution, but figuring out where to adapt the Three.js code and the shaders would probably turn out to be a struggle. As it would also involve a transformation to paraboloid space it would be really hard to debug. All this would be required for a lesser effect than the other - hopefully more simple - methods, like the larger texture with viewports. Image taken from [gamedevelop.eu] explaining the paraboloid transformation. 
The most favorable approach In conclusion the best way to make point light shadows work, without going over the texture-sampler limit or spending too much time, is the "large texture with viewports" approach. This means we have to duplicate some code in the shader and implement two loops to do shadow calculation: one calculating shadows for all the spot lights and one for all the point lights. After implementing this strategy we ran into another problem. This time the number of varying variables ([GLSL standard] page 31) in the shaders exceeded the WebGL implementation register limit. This limit in Chrome is fixed at 16. This meant we could only have one point light with shadows, which is even fewer than when we used the naive implementation. In Firefox the limit is higher which results from it being hardware implementation defined. On my - basic - hardware, it works smoothly with two point lights, but the performance starts to suffer when enabling three or more point lights. The result is shown in the video below. The reason for this is that the hardware implementation of the fragment shaders only has a couple of "fast registers". These are actually separate hardware implementations of real registers which allow fast access to the data stored within. If you exceed this hardware limit, values normally stored in these fast registers will be stored in "slow registers". These are implemented by storing them in Video RAM, which is much slower relative to the fast registers. Shadows from point lights in our demo Conclusion Can we use these results for something practical? Yes, in Firefox this demo will run in real-time with a couple of point lights IF your hardware has some extra "fast registers". If you want to use more than a couple point lights, you can still use this implementation to generate screenshots of scenes that give a nice impression of the shadows being cast (in a garden, in a living room etc). For an extensive, real-time solution you will need above average desktop hardware. Consumers using popular devices (smartphones, tablets, laptops) are obviously not part of the target audience. However, practical applications are still to found in - for example - fixed setups like a presentation in an exhibition stand. [GLSL standard]: https://www.khronos.org/files/openglesshadinglanguage.pdf#page=37 [devmaster.net]: http://devmaster.net/p/3002/shader-effects-shadow-mapping [gamedevelop.eu]: http://gamedevelop.eu/en/tutorials/dual-paraboloid-shadow-mapping.htm [WebGL Inspector]: http://benvanik.github.io/WebGL-Inspector/ "WebGL inspector homepage" [Three.js]: http://threejs.org/ "three.js homepage" [previous blog post]: /blog/15/point-light-shadows-in-threejs
/assets/img/blog/skauti.png
Gepubliceerd op

15-07-2015

Categorie

Tech

3D data visualization

At tweede golf, we value innovation: we take the time to research new technologies and subsequently challenge ourselves to try out these new techniques in order to discover new applications. We also like to learn by doing: build something first, ask questions later. Following that philosophy, we recently held a programming contest. We gave ourselves two days to create new applications on top of our existing 3D framework. One of the teams created an app they dubbed "Skauti". It uses a 3D representation to visualize datasets. Now is the time to look back: what are the benefits of 3D data visualization? Read on and find out. The Skauti prototype We all like to base our decisions on data. However, just having a data sheet containing a lot of numbers will often not help you much, especially when the data set is very large. Graphs and visualizations are typcially used to obtain a better understanding of data. When it comes to presenting geospatial data (i.e. data which is dependent on some location) a 2D map is often the preferred solution. Making sense of data For centuries, cartographers (map-makers) have used projections and symbolism to create a 2D interpretation of the actual world. Nowadays, 2D maps are still widely used, as are traditional tools like adding colour, using markers and higlighting areas to indicate special points of interest with the goal of providing as much insight into the data as possible. 2D vs 3D A more detailed look at the map above (which is a visualization of city sizes in Devon, England) reveals the limitations of 2D visualizations. Did you notice the tiny 35 (Topsham) under Exeter? And which city is bigger: Torquay or Paignton? Their bubbles overlap, making it harder to identify their sizes. How to solve these problems? If we could use the third dimension as well, we could use a height instead of the circle radius to indicate city sizes. In addtion, we can simply choose a radius which makes sure there are no overlaps, making it easier to interpret the data. What about the use of colour? Sometimes color is used to indicate some value (for example in a heatmap). But what does green signify and how should we interpret red? If we compare the two maps of Mount Taranaki in New Zealand we can find below (one a traditional 2D height map, one a 3D representation), it is immediately apparent that the 3D version gives us more detailed information and it presents us with a more intuitive understanding of the mountain. © CC BY-SA Koordinates © CC BY-SA 3D visualizations can be run in your webbrowser using WebGL with vector based approaches instead of the pixel based tile maps often used for 2D cartography on the web. Not only does this look great, it has some more advantages: interaction becomes easier to achieve and scrolling and zooming can be made into a more smooth experience for the user. The Skauti prototype We wanted to make the most of these advantages. We set ourselves the challenge to create a small prototype of a 3D map. In this prototype we took our own city, Nijmegen, and we used building data provided by the Dutch government, specifically the [BAG] and [AHN2] datasets, to determine where buildings are and how tall they are. We used these datasets before to create fancy point cloud visualizations. We then picked some houses in our neighbourhood from Funda (a Dutch website listing properties for sale and for rent) and assigned them a color based on whether they were for sale or for rent. 
We also built a quick animation that gives the user access to more detailed information at the top of the screen. 3D data visualization (dubbed "Skauti") Above you can see what typical user interaction in this prototype is like. Given the fact it only took us the time span of a two-day programming contest to make this prototype, we can only imagine what can be achieved using this technique. If you see an application that is useful to you, do not hesitate to contact us. [point cloud visualizations]: /#portfolio-planviewer-3d [BAG]: https://data.overheid.nl/data/dataset/basisregistratie-adressen-en-gebouwen-bag-
/images/Screen Shot 2017-12-12 at 21.00.23.png
Gepubliceerd op

21-05-2015

Categorie

technology

Point light shadows in Three.js

For a research and development project we created a small garden environment in which you can place lights. The objective was to visualise what your garden would look like during the night, beautifully lit according to your personal light design. Of course objects in your garden cast shadows and influence the look and feel of the lighting: we needed to include shadow casting in our demos. That seemed doable, but it turned out that the framework we use for our WebGL development, Three.js, only supports shadow casting for spot lights. Unfortunately not all lights in our gardens are spot lights... We needed to find a way to cast shadows from point lights in Three.js. Screen from the garden prototype. Spot light support only... The first step was understanding how shadow casting works for spot lights and why this would not work for point lights. The process of shadow casting is quite elegant. It takes an extra render pass and one comparison per object and light to determine if a fragment should be shaded. The extra render pass renders, for every light, the distance of the objects to that light to a separate texture; this is called the depth pass. You do this by rendering the scene from the light's point of view: instead of the inverse view matrix of the camera, which represents the position and rotation of the camera, you apply the inverse of the model matrix of the light you want to calculate shadows for. The above picture shows a color representation of the depth value. Each color corresponds to a 32-bit integer indicating where its z-position lies between Z-Near and Z-Far. So the maximum value of the 32-bit integer corresponds with Z-Far and the 0 value corresponds with Z-Near. Now you can check for every pixel if it is in the shade of a certain light as follows. Take the world coordinate of the fragment you are processing right now; this information can be passed by the vertex shader and interpolated in the fragment shader. Transform it into light space: mimic looking at that spot from the position of that light. This is achieved by passing the transformation matrix of the light you are processing to the fragment shader. Then transform it to a pixel coordinate by applying the perspective transformation of the corresponding "shadow camera". This determines what area is shaded by this light. The next step is to calculate the distance between the point and the light: if this is greater than the distance stored in the depth pass texture (z-buffer texture), the pixel is shaded by this lamp; if it is closer, it is illuminated. But this only works well for spot lights, because of how perspective works in OpenGL. Perspective mapping in OpenGL works with a frustum, which is a pyramid with the top cut off. The new plane that is created by removing the top of the pyramid is called the near plane and this is what you see on screen. All other pixels are coloured by tracing a ray from the eye through the pixel you want to colour to the far plane (which is the bottom of the pyramid). This works for small angles, but the maximum angle you can approach is 180 degrees, at which point your near plane will be very small and very close to the camera position, causing weird distortions in the rendering. For point lights we would need what is called a 360 degree field of view, and this is simply impossible with a single frustum. So what do we do? Instead of doing 1 depth pass we do 6: one for every unit direction of the space we are in, so one in the +x, -x, +y, -y, +z and -z direction, all with a horizontal and vertical field of view of 90 degrees.
This covers the whole space. But where do we render them to? We can't use 1 texture for all of them, as each pass would simply overwrite the previous depth pass. There are four solutions: 1) render them to 6 different textures, which takes up 6 of the (in our case) 16 texture samplers available to us in the WebGL fragment shader pipeline; 2) render them to one big texture with smaller viewports, which lowers the maximum possible shadow resolution and complicates the shaders, as you need to map the shadow camera matrices to the right parts of the texture; 3) use a cube map, a 3D texture that consists of 6 2D textures corresponding to the sides of a cube, creating a shadow cube - this only uses one sampler but has the same shader complexity of mapping a shadow perspective to the right cube side; 4) use dual paraboloid shadow mapping, which does allow us to use 1 texture for point lights but still needs the 6 render passes we talked about; another downside of this technique is that there are slight distortions, but all in all it should look nice enough. All four are candidates for the final prototype. So far we have only implemented the first one, which uses 6 separate textures per point light. The result of that implementation is shown below. Update: we've written a follow-up article "Point light shadows in Three.js, part II". Test environment for point light shadows
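To give an idea of what the first option looks like in practice, here is a minimal sketch, not the prototype code itself, of the six 90° depth passes for a single point light. It assumes a recent Three.js API (renderer.setRenderTarget) and globally available THREE, renderer and scene objects; a full implementation would also choose a proper up vector per cube face and pass the resulting textures to the shadow shader:

~javascript
// Minimal sketch of option 1: six 90° depth passes for one point light,
// each rendered to its own texture.
const SHADOW_MAP_SIZE = 512;
const directions = [
  new THREE.Vector3( 1, 0, 0), new THREE.Vector3(-1, 0, 0),
  new THREE.Vector3( 0, 1, 0), new THREE.Vector3( 0, -1, 0),
  new THREE.Vector3( 0, 0, 1), new THREE.Vector3( 0, 0, -1),
];

function renderPointLightDepthPasses(light) {
  const depthMaterial = new THREE.MeshDepthMaterial();
  const targets = directions.map(function (dir) {
    // one 90° "shadow camera" per unit direction; together they cover the whole space
    const shadowCamera = new THREE.PerspectiveCamera(90, 1, 0.1, 100);
    shadowCamera.position.copy(light.position);
    shadowCamera.lookAt(light.position.clone().add(dir));
    shadowCamera.updateMatrixWorld(true);

    const target = new THREE.WebGLRenderTarget(SHADOW_MAP_SIZE, SHADOW_MAP_SIZE);
    scene.overrideMaterial = depthMaterial;   // render depth instead of colour
    renderer.setRenderTarget(target);
    renderer.render(scene, shadowCamera);
    return target;
  });
  scene.overrideMaterial = null;
  renderer.setRenderTarget(null);             // back to rendering to the screen
  return targets;                             // six depth textures, one per direction
}
~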
/assets/img/blog/floor-and-arrow.png
Gepubliceerd op

17-04-2015

Categorie

development

Threejs rotations

In this post we create a first person 3D setting, and we use rotations to accomplish this. In Threejs you create a scene like so:

~
let scene = new THREE.Scene();
~

This is the root scene; all other 3D objects have to be added to this root scene to make them visible:

~
let obj3D = new THREE.Object3D(); // create a 3D object
scene.add(obj3D);
~

The coordinate system of Threejs is a right-handed system, which means that the positive z-axis is pointing towards you: In the picture below you see a Threejs scene. The red line is the x-axis, the green line the y-axis and the blue line the z-axis. A dotted line indicates a negative axis. The black arrow in the yellow square is a THREE.Mesh with a THREE.PlaneBufferGeometry; a mesh is a subclass of THREE.Object3D, the basic 3D object in Threejs. On this plane a texture of an arrow has been mapped. The plane has been added to the root scene without any rotation or translation. What we can learn from this picture is that a 3D object without translation and rotation gets added to the origin of the scene. And as far as rotation is concerned: a 3D object that has no rotation on any of the three axes stands perpendicular to our line of sight and has its upside in the direction of the positive y-axis. Creating a floor If we want to create a floor, or a ground, for our 3D scene, we have to rotate a plane -90° over the x-axis; play the following video to see how that works out: If you rotate a 3D object in Threejs you only change its rotation in relation to the root scene: its own coordinate system is not affected. In the video above the positive y-axis of the plane gets aligned with the negative z-axis of the root scene. What we could do as well is to apply the rotation to the root scene as a whole; in that case the axes of the floor and the root scene stay aligned with each other: Both solutions are equally valid, but there is one caveat: if you choose to rotate the root scene, please make sure that you do not add the camera to the scene, because that would cancel out the rotations, see this post. Moving over a floor Now let's create a proper floor and add the arrow object to the floor: Next we want to move the arrow object one unit in the direction the arrow head is pointing. We use trigonometry to calculate the fraction of the unit the arrow object has to move over the x-axis and the fraction of the unit the arrow object has to move over the y-axis, based on its rotation over the z-axis:

~
arrow.position.x += unit * Math.cos(arrow.rotation.z);
arrow.position.y += unit * Math.sin(arrow.rotation.z);
~

Because the z-rotation of the arrow object is 0°, this boils down to:

~
// Math.cos(0) === 1 and Math.sin(0) === 0, so:
arrow.position.x += unit;
arrow.position.y += 0;
~

The translation of the arrow object on the x-axis is 1 unit, and the translation on the y-axis is 0, which means that the arrow object is moving over the red line (the x-axis) to the right instead of over the green line (the y-axis) away from us. This is rather counterintuitive, and it is the result of the fact that in Threejs the top of a 0° rotated 3D object is in the direction of the positive y-axis, which is a very understandable decision because the y-axis is usually the vertical/upright axis. We can fix this by rotating the floor or the root scene 90° over the z-axis. Let's rotate the root scene so the axes of the floor stay aligned with the axes of the root scene: The arrow object is now moving away from us, but the head of the arrow points in the wrong direction.
The rotation of the arrow object is still 0°, but the texture on the arrow object (the textured plane) makes us believe that the arrow object has a rotation of -90°. We fix this in the arrow object itself by rotating the texture 90°, which makes the direction of the arrow head consistent with the rotation of the plane that it is applied to. Conclusion If we rotate the root scene (or the floor) -90° over the x-axis, the y-axis becomes the 'away into the distance' axis, the natural axis that we want to move along when moving straight forward. But because the natural angle of a straight-forward movement is 0°, the x-axis is actually the most natural axis for moving forward, so we rotate the root scene (or the floor) 90° over the z-axis as well, to swap the x and the y-axis. Now we have created the ideal situation for a first person setting. You can play with the final result yourself. Code is available at GitHub.
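As a recap of the steps above, here is a minimal sketch of the final setup. It is not the exact code from the GitHub repository, the texture file name is made up, and we rotate the plane geometry rather than the texture, which has the same visual effect:

~
// Minimal sketch of the first person setup described above.
// Assumes THREE is loaded globally, as in the snippets above.
let scene = new THREE.Scene();
scene.rotation.x -= Math.PI / 2; // -90° over x: the y-axis becomes the 'into the distance' axis
scene.rotation.z += Math.PI / 2; // +90° over z: swap x and y, so a 0° z-rotation means straight ahead

let geometry = new THREE.PlaneBufferGeometry(1, 1);
geometry.rotateZ(-Math.PI / 2); // compensate in the object itself, so the arrow head matches a 0° rotation
let material = new THREE.MeshBasicMaterial({
  map: new THREE.TextureLoader().load('arrow.png'), // hypothetical arrow texture
  transparent: true,
});
let arrow = new THREE.Mesh(geometry, material);
scene.add(arrow);

// move the arrow one unit in the direction it is facing (its rotation over the z-axis)
function moveForward(unit) {
  arrow.position.x += unit * Math.cos(arrow.rotation.z);
  arrow.position.y += unit * Math.sin(arrow.rotation.z);
}
~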
/assets/img/blog/threejs-barrel-distortion.jpg
Gepubliceerd op

03-04-2015

Categorie

development

WebVR and Three.js

Let's start simple. Our first WebVR application is a big cube in Threejs and a simple 3D scene floating inside that cube. The 3D scene consists of a transparent floor with a few simple rectangular shapes placed on it. On each side of the cube we print the name and direction of the axis towards which the side is facing. Let's call this cube the "orientation cube", and let's call the 3D scene "the world", because that is what it is from the user's perspective. Both the orientation cube and the world are directly added to the root scene, which is the scene you create with the code rootScene = new THREE.Scene(). When wearing an Oculus, you are positioned in the middle of this orientation cube and you can look to all sides of the cube by moving your head. You can move around in the world with the arrow keys of your keyboard. The API To get the rotation and position data of the Oculus using javascript, we first query for VR devices:

~javascript
if (navigator.getVRDevices) {
  // getVRDevices returns a promise
  navigator.getVRDevices().then(
    // the fulfilled callback receives an array containing all detected VR devices
    function onFulfilled(data) {
      detectedVRDevices = data;
    }
  );
}
~

The detected VR devices can be instances of PositionSensorVRDevice or instances of HMDVRDevice. PositionSensorVRDevice instances are objects that contain data about rotation, position and velocity of movement of the headset. HMDVRDevice instances are objects that contain information such as the distance between the lenses, the distance between the lenses and the displays, the resolution of the displays and so on. This information is needed for the browser to render the scene in stereo with barrel distortion, like so: To get the rotation and position data from the PositionSensorVRDevice we need to call its getState() method as frequently as we want to update the scene.

~javascript
function vrRenderLoop() {
  let state = vrInput.getState();
  let orientation = state.orientation;
  let position = state.position;
  if (orientation !== null) {
    // do something with the orientation,
    // for instance rotate the camera accordingly
  }
  if (position !== null) {
    // do something with the position,
    // for instance adjust the distance between the camera and the scene
  }
  // render the scene
  render();
  // get the new state as soon as possible
  requestAnimationFrame(vrRenderLoop);
}
~

Putting it together For our first application we only use the orientation data of the Oculus. We use this data to set the rotation of the camera, which is rather straightforward:

~
let state = vrInput.getState();
camera.quaternion.copy(state.orientation);
~

Usually when you want to walk around in a 3D world as a First Person you move and rotate the camera in the desired direction, but in this case this is not possible because the camera's rotation is controlled by the Oculus. Instead we do the reverse: keeping the camera at a fixed position while moving and rotating the world. To get this to work properly, we add an extra pivot to our root scene and we add the world as a child to the pivot:

~
camera
root scene
 ↳ orientation cube
 ↳ pivot
    ↳ world
~

The camera (the user) stays fixed at the same position as the pivot, but it can rotate independently of the pivot. This happens if you rotate your head while wearing the Oculus. If we want to rotate the world, we rotate the pivot.
If we want to move forward in the world, we move the world backwards over the pivot, see this video: You can try it yourself with the live version; the arrow keys up and down control the translation of the world and the arrow keys left and right the rotation of the pivot. The source code is available at GitHub. According to Threejs, the arrow in the picture above has a rotation of 0° on the z-axis (and on the other 2 axes as well, for that matter). However, in trigonometry a 0° rotation over the z-axis is a vector in the direction of the positive x-axis, so the real rotation of the arrow in the picture is 90°. If we want to move the arrow one unit in the direction towards which the arrow is rotated, we use trigonometry to calculate the fraction of the unit the arrow has to move over the x-axis and over the y-axis; therefore we have to compensate for Threejs' unorthodox reading of a rotation of 0°:

~
arrow.position.x += unit * Math.cos(arrow.rotation.z + Math.PI / 2);
arrow.position.y += unit * Math.sin(arrow.rotation.z + Math.PI / 2);
~

Another thing that we can learn from the image above is that in order to make a ground for our 3D scene, we need to rotate the arrow by -90° over the x-axis: Instead of rotating the arrow to make a floor, you could also choose to rotate the whole root scene. And you could perform a rotation over the z-axis at the same time to compensate for the fact that 0° in Threejs is actually a rotation of 90°:

~
scene.rotation.x -= Math.PI / 2;
scene.rotation.z += Math.PI / 2;
~

If you choose to rotate the scene, please make sure that you do not add the camera to the scene, see the next section. About the camera in Threejs The camera in Threejs is on the same hierarchical level as the root scene by default, which is like a cameraman who is filming a play on a stage while standing in the audience; theoretically both the stage and the cameraman can move, independently of each other. If you add the camera to the root scene then it is like the cameraman stands on the stage while filming the play; if you move the stage, the cameraman will move as well. You can also add a camera to any 3D object inside the root scene. This is like the cameraman standing on a cart on the stage while filming the play; the cameraman can move independently of the stage, but if the stage moves, the cameraman and her cart will move as well. In our application the camera is fully controlled by the Oculus, so the first scenario is the best option. This comes in handy, since we have applied rotations to the root scene (see earlier in this post). As a consequence, if we were to add the camera to the scene, the rotations of the scene would have no effect. Here is an example of a situation whereby the scene rotates while the camera is added to that same scene: Note that in most Threejs examples you find online it does not make any difference whether or not the camera is added to the root scene, but in our case it is very important. The result We have made two screencasts of the output rendered to the Oculus:
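To make the pivot construction a bit more concrete, here is a minimal sketch under the same assumptions as the snippets above (THREE loaded globally); the keyboard handling is our own illustration and not the exact code from the GitHub repository:

~javascript
// Minimal sketch of the pivot construction described above.
let rootScene = new THREE.Scene();
let pivot = new THREE.Object3D();
let world = new THREE.Object3D(); // the floor and the shapes would be added to this object
pivot.add(world);
rootScene.add(pivot);

// the camera stays where the pivot is, but is NOT added to the root scene;
// its rotation is fully controlled by the Oculus (camera.quaternion.copy(state.orientation))
let camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);

document.addEventListener('keydown', function (event) {
  let step = 0.5;
  if (event.key === 'ArrowUp')    world.position.z += step; // move the world backwards = move forward
  if (event.key === 'ArrowDown')  world.position.z -= step;
  if (event.key === 'ArrowLeft')  pivot.rotation.y += 0.05;  // rotate the world around the user
  if (event.key === 'ArrowRight') pivot.rotation.y -= 0.05;
});
~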
/assets/img/blog/osmose-wired.jpg
Gepubliceerd op

04-03-2015

Categorie

Tech

VR, HMI and HCI

The interaction between a human and a computer, also called human-machine interaction (HMI) or human-computer interaction (HCI), has changed quite a lot in the past decades. Virtual reality (VR) and augmented reality (AR) have received a revived interest due to the development of devices like the Oculus Rift and Microsoft's Hololens. With this in mind, HCI will probably change even more radically in the coming years. Short history HCI has been a topic of active research for decades; researchers and artists have invented the most exotic technologies, for instance Char Davies' art project Osmose, in which the user navigates by breathing and moving her body. The vest is used to measure the breathing of the user Obviously, not every invention made it to the consumer market, but most technologies we use today were invented long before they became mainstream. There are for instance striking similarities between Google Glass and the EyeTap developed by Steve Mann in the 1980's and 1990's. Development of the EyeTap since 1980 We have come a long way since the interaction with punched cards in the early days. In the 1960's user interaction happened mostly via the command-line interface (CLI) and although the mouse was already invented in 1964, it only became mainstream with the advent of the graphical user interface (GUI) in the early 1980's. GUIs also made it more apparent that HCI is actually a two-way communication: the computer receives its input via the GUI and also gives back its output or feedback via the GUI. First mouse as invented by Douglas Engelbart NUI and gestures Speech control became consumer-ready in the 1990's (though very expensive back then). What is interesting about speech control is that it was the first appearance of a Natural User Interface (NUI). NUI roughly means that the interface is so natural that the user hardly notices it. Another example of NUI is touchscreen interaction, though we have to distinguish between using touch events as a replacement for mouse clicks, such as tapping on a button element in the GUI, and gestures, for instance a pinch gesture to scale a picture. The latter is NUI, the former is a touch-controlled GUI. Instead of making gestures on a touch screen, you can also perform them in the air in front of a camera or a controller such as the Leap Motion. Gestures can also be made while wearing a data glove Interaction with brainwaves Wearables such as smart watches are usually a mix between a remote controller and an extra monitor for a mobile device. As a remote controller you can send instructions like on a regular touchscreen, but the Apple Watch, for instance, has a classic rotary button for interaction as well. Wearables can also communicate other types of data coming passively from a human to the computer, like heart rate, skin temperature and blood oxygen, and probably a lot more to come when more types of sensors become smaller and cheaper. Google Glass is a wearable that can be controlled by voice and by brainwaves. By using a telekinetic headband that has sensors for different areas of the brain, brainwaves are turned from passive data into an actuator. Typical fields of application are medical aids for people with a disability. Showing a headband with 3 sensors on the skull and one that clips onto the user's ear AR and VR With AR a digital overlay is superimposed on the real world, whereas with VR the real world is completely replaced by a virtual (3D) world. Google Glass and Hololens are examples of AR devices.
The Oculus Rift and Google Cardboard are examples of VR devices. Google Glass renders a small display in front of your right eye; the position of this display in relation to your eye doesn't change if you move your head. Hololens, on the other hand, actually 'reads' the objects in the real world and is able to render digital layers on top of these objects. If you move your head, you'll see both the real world object and the rendered layer from a different angle. Hololens rendering interfaces on real world objects AR is very suitable for creating a Reality User Interface (RUI), also called a Reality Based Interface (RBI). In an RBI, real world objects become actuators; for instance, a light switch becomes a button that can be triggered with a certain gesture. An older and more familiar example of RBI is when a 3D scene is rendered on top of a marker; when you rotate the marker in the real world, the 3D scene will rotate accordingly. Instead of a marker you can also use other real world entities; for instance, Layar makes use of the GPS data of a mobile device. VR is commonly used for immersive experiences such as games, but it can also be used to experience historical or future scenes, like buildings that have been designed but haven't been built yet. An example of an RBI: a marker is used to control a 3D scene Researching VR for the web We will be looking at two VR devices in the near future: the Oculus Rift and Google Cardboard. In the coming blog posts we will share the results with you. Links: NUI Wearables Osmose Multitouch The video was made in 2006: note how enthusiastic the audience is about multi touch control; nowadays multi touch control is part of our daily life. Brainwaves First mouse Hololens
/assets/img/blog/google-cardboard2.jpg
Gepubliceerd op

21-02-2015

Categorie

Tech

Virtual reality and the web

Nowadays most VR applications are native games that are developed with tools like Unity and Unreal. These games have to be downloaded from the regular app stores, or from other app stores that have been set up by manufacturers of virtual reality headsets, like Samsung's Gear VR app store. The biggest benefit of native applications is their unbeatable performance, which is crucial for games. However, you can use VR for other purposes as well. For instance, you can add VR to panorama viewers to make them more immersive. Likewise, you could build 3D scenes that are architectural or historical recreations of buildings that you can enter and walk around in with your VR headset. These kinds of applications are relatively easy to develop using web technologies. Panorama viewer by Emanuele Feronato The benefits of developing using open web technologies are obvious: you can publish your content instantly without gatekeepers (app stores), you can use your own cheap or free tools, there is a culture of collaboration in the web developers' community, and so on. Both Mozilla and Google saw the potential of VR on the web and started to develop an API that provides access to VR devices. Currently only the Oculus Rift is supported, which will probably change as soon as new devices hit the market. Mozilla and Google are working on one and the same API for WebVR, unlike what happened in the past with the development of the Web Audio API. Mozilla has already implemented WebVR in the nightly builds of Firefox. It is not yet known whether Spartan, Microsoft's new browser for Windows 10, is going to support WebVR. However, it probably will, since so far Spartan has shown good intentions when it comes to new browser standards. Google also created an open source hardware VR device, the Google Cardboard. This is a device made of cardboard that turns a mobile device into a standalone VR headset. The mobile device's gyroscope, accelerometer and magnetometer are used to track the rotation and position, and the 3D content is rendered by the device itself. The Google Cardboard, combined with the WebVR API and web technologies for generating the 3D scene, makes creating VR applications achievable for a large audience. The WebVR API is able to detect a connected VR device, or to detect whether the browser is running on a device that can be used as a standalone VR device, such as a mobile phone or a tablet. A single physical VR device shows up as an HMDVRDevice object and as a PositionSensorVRDevice object, but both objects share the same hardware id, so you know they are linked. The first object contains information related to the display and the lenses, such as the resolution, the distance between the lenses and the distance from your eyes to the lenses. The latter object contains information about the position, rotation and velocity of movement of the device. To create the 3D content you can use a myriad of javascript 3D libraries, but Threejs is by far the most popular and easiest to use. At Tweede Golf we continually check other libraries, but so far we have stuck with Threejs. What's more, Threejs already supports VR devices; there are controls available that relay the tracking data from the sensors, and renderers that do the stereo rendering for you. Now that WebGL has landed in all browsers across all operating systems, both mobile and desktop, the biggest hurdle for rendering 3D content in a browser has been taken away. VR opens great opportunities to change the way we experience the web.
For instance, Mozilla is experimenting with rendering existing web pages with CSS3 and WebGL for VR devices. In the next blog post we show you our first test with WebVR. Links: The Current Status of Browser-based Virtual Reality in HTML5 A series of videos shot at the SFHTML5 meetup about VR and HTML5
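As a rough sketch of the device pairing described above, matching an HMD with its position sensor could look something like this; it uses the early WebVR API this post refers to (which has since been replaced), so the property names such as hardwareUnitId belong to that old, now-obsolete API:

~javascript
// Rough sketch of pairing an HMDVRDevice with its PositionSensorVRDevice
// using the early WebVR API described in this post (long since replaced).
navigator.getVRDevices().then(function (devices) {
  let hmd = devices.filter(function (device) {
    return device instanceof HMDVRDevice;
  })[0];
  if (!hmd) { return; } // no headset connected

  let sensor = devices.filter(function (device) {
    // the position sensor that belongs to the same physical headset
    // shares its hardware id with the HMD device
    return device instanceof PositionSensorVRDevice &&
           device.hardwareUnitId === hmd.hardwareUnitId;
  })[0];

  console.log('display:', hmd.deviceName, 'sensor:', sensor.deviceName);
});
~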
/images/G91F075_038F.jpg
Gepubliceerd op

19-01-2015

Categorie

Tech

The history of virtual reality

The history of virtual reality (VR) dates back to the 1950's. Since then, a lot of - sometimes quite exotic - devices have been developed. For instance, take a look at this VR cabinet called "Sensorama", developed by Morton Heilig in 1962: Nowadays, most VR devices take the form of head-mounted devices (HMDs). Probably the best known example of such a device is the Oculus Rift. The device looks a bit like safety goggles. Let's dive into some technical details of the Oculus Rift. The Oculus Rift Developer Kit 2 and positional tracker Displays and lenses For each eye the Oculus has a full HD display on which the 3D content (for instance a game or a video) is rendered. The content has to be rendered in stereo, which means that the image for the left display is taken from a slightly different angle compared to the image on the right display. This difference is analogous to the distance between our two eyes. Example of an early stereo image This shows the different camera positions of the photo We look at the image through a set of specially shaped lenses; these lenses distort the image in such a way that the field of view (FOV) becomes larger than the actual size of the displays in the Oculus. In the image below the letter X (in the red box) indicates the size of the real screen; the letter X' (X-prime) is the size of the screen you think you see because you look through the lenses: The distortion of the image caused by the lenses is called pincushion distortion and looks like this: To cancel out the pincushion distortion, the image is rendered with barrel distortion, which looks like this: The net result of the pincushion distortion of the lenses and the barrel distortion of the image is that you see an undistorted image that is bigger than the screen size of the Oculus. As you can see in the image, a side effect of barrel distortion is that the image is stretched out towards the edges. This means that the pixel density is lower in the outer regions of the image. This is not a problem, because it is much like how our own vision works in real life: the objects we see in our peripheral vision are not as sharp as the objects we see right in front of us. Shown in the image below: the red cone is the FOV that we can really focus on, and objects in the green and blue cones are increasingly more blurry. Tracking rotation, movement and position The Oculus has sensors that track rotation and the velocity of your movements; in the device you find a gyroscope, an accelerometer and a magnetometer. Furthermore, the Oculus has 40 LEDs that are tracked by the separate positional tracker device. This device looks a bit like a webcam and ideally you mount it on top of your computer monitor. The data coming from all sensors and trackers gets combined in a process called sensor fusion. Sensor fusion roughly means that you combine data coming from different sources to calculate data that is more accurate than the data that comes from each individual source. Generating the 3D scene The Oculus has to be connected to a computer: an HDMI cable for the displays and a USB cable that attaches to the connector box. The connector box receives a cable from both the positional tracker and the HMD itself. All the data from the sensors is combined to create a 3D scene that is in accordance with the position and movement of your head and your body, which makes you feel like you are actually standing inside that scene.
Because the Oculus Rift blocks your vision of the real world and you are connected to a computer like a goat tied to a pole, it is quite hard - if not dangerous - to walk around while wearing an Oculus. Therefore, other devices have been developed that transfer physical walking movements to the computer as well; see the images below. On the other hand, it is very likely that in the near future the on-board processor of the Oculus will be fast enough to render the 3D content, and thus the Oculus Rift would become a standalone device, like Microsoft's Hololens. This device (currently on Kickstarter) takes it even further: Other devices Besides Oculus, numerous other companies have made or announced HMDs for VR. You can roughly divide them into three categories: 1) devices that have to be connected to a computer, 2) devices that work with a mobile phone and 3) standalone devices. The Oculus is of the first category; it needs a computer for rendering the content. On the one hand the HMD is an extra monitor for your computer, and on the other hand it is an input device that tracks your movements. In the future the connection between the HMD and the computer will probably become wireless. Google's Cardboard is an example of the second category: the phone's gyroscope, accelerometer and magnetometer are used to track the rotation and position, and the 3D content is rendered by the phone itself. Microsoft's Hololens is of the third category. With the increasing power of mobile processors and co-processors for rendering and motion, we will probably see more devices of this type in the future. The advantage of the first category is that you have more processing power for rendering the 3D content; the advantage of the second category is that you are not tied to your computer by wires and that it is a relatively cheap solution, provided that you already own a smartphone with decent processing power. The third category combines the advantages of the first two categories. Links: Barrel distortion Nvidia standalone HMD Oculus Rift teardown
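To make the barrel distortion discussed above a little more concrete, here is a minimal sketch of a simple polynomial distortion model; the coefficients are made up for illustration and the actual Oculus SDK uses its own calibrated, per-lens values:

~javascript
// Minimal sketch of barrel distortion using a simple polynomial model:
// r' = r * (1 + k1 * r^2 + k2 * r^4). The coefficients below are made up
// for illustration; the real Oculus SDK uses calibrated, per-lens values.
function barrelDistort(x, y, k1, k2) {
  // (x, y) is a point in normalized image coordinates, relative to the lens centre
  const r2 = x * x + y * y;
  const scale = 1 + k1 * r2 + k2 * r2 * r2;
  return { x: x * scale, y: y * scale };
}

// a point halfway towards the edge gets pushed further outwards,
// which cancels the pincushion distortion of the lenses
console.log(barrelDistort(0.5, 0.0, 0.22, 0.24)); // -> { x: 0.535, y: 0 }
~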
