

Server-side rendering for React web apps

Daniel

In the early days of the internet, server-side rendering of HTML pages was the only option. The world was simple: every click on a link fetched a completely new page from the server and displayed it in the browser. As javascript grew more powerful, it became possible to render pages (partially or entirely) in the browser as well. Because of the advantages of client-side rendering (see below) and the fact that web pages have increasingly become full, interactive applications, recent years have seen the rise of frameworks that make client-side rendering easy and efficient, such as React, Angular and Vue. The big drawback of client-side rendering is that the content is harder for search engines to find. On the other hand, search engines have adapted to the growing number of CSR sites. Some search engines, for example, execute the javascript on pages of high-traffic sites, and the Google crawler nowadays indexes React components to a certain extent (see the links at the bottom of this post).

Before we go into the way we use server-side rendering for React web apps, let's first list the pros and cons of server-side and client-side rendering:

Server-side rendering (ssr)

- Pro: pages are indexable by search engines
- Pro: fast load time for the first page
- Con: a lot of contact (and data traffic) with the server, and therefore slower, because every request fetches the whole page
- Con: less control over transitions between pages, such as animations

Client-side rendering (csr)

- Pro: after the first page, subsequent pages load fast
- Pro: less server traffic
- Pro: page transitions can be animated
- Pro: pages can be partially re-rendered (for example: a login form is added to the page)
- Con: pages are not indexable by search engines out of the box
- Con: rendering the first page takes longer because all javascript has to be loaded first

Best of both worlds

At Tweede golf we often build React web applications for which indexability in particular is a must, which makes ssr a necessity. In those cases we apply the following, conceptually simple, combination of both render methods: the first page of the site or application is rendered server-side, and all subsequent pages client-side. Since every page of a site or application can be the first page, all pages get indexed by search engines.

An existing React app can be converted to ssr quite easily by using a special method called renderToString. It turns the root component of a React app (or any component) into a ready-made HTML string that you can paste into an HTML page and serve with a web server.

Rendering React on the server

Because React is a javascript module, the renderToString method mentioned above needs a javascript runtime on the server. Although there are libraries that let you render React with php via an extension, we advise against this approach: these libraries are often slow, still experimental, or no longer maintained. We therefore use Nodejs with an http server such as Express or Koa. This server runs behind the web server via a proxy.
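As a minimal sketch of such a server (the App component, the port and the file layout are assumptions for the sake of the example, and the code is assumed to be transpiled so that JSX works on the server):

```js
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import App from './App'; // hypothetical root component of the React app

const app = express();

app.get('*', (req, res) => {
  // turn the root component into a ready-made HTML string
  const markup = renderToString(<App />);
  res.send(`<!DOCTYPE html>
<html>
  <head><title>My app</title></head>
  <body>
    <div id="root">${markup}</div>
    <script src="/static/bundle.js"></script>
  </body>
</html>`);
});

app.listen(3000);
```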
In theory you could also use the Nodejs server as the public-facing web server, but we prefer a mature web server such as nginx, which has extensive configuration options for https, compression and caching. Moreover, nginx is much faster at serving static assets such as images, stylesheets, fonts and javascripts. The Nodejs server therefore only serves an HTML page containing the React app and the references to the static assets which, as mentioned, are served by nginx.

When React is rendered on the client, the app is assigned an HTML element on a page, within which React can manipulate the DOM tree. This HTML page can be a static page or a dynamic page generated by, for example, PHP. With server-side rendering we render both the React app and the HTML page; this way we can also write dynamic data into the HTML, such as metatags that come from the database.

State rehydration on the client

Because the page was rendered on the server, it has effectively become a static page. To render the subsequent pages on the client again, we have to perform state rehydration. We put the javascript code that does this at the very bottom of the page, just before the closing body tag; as a result you first see the whole page, then the javascript is loaded, and finally the state rehydration is performed. Rehydration is the process of deriving (extracting) the client-side state from the server-side rendered markup. If you implement this correctly, hydrating the state does not trigger a new client-side render cycle. During rehydration React adds event listeners, among other things. If Redux or another state management library is used, the initial state has to be passed to the javascript runtime, for example via a global variable; a sketch of this follows after the reading list below.

Further reading

- Client-side rendering vs. server-side rendering
- New server-side rendering features in React 16
- To use server-side rendering or not?
- A simple example of ssr with React (N.B. this example uses React 15)
- Is ssr necessary for SEO?
- SEO and React sites
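As promised above, a minimal sketch of the rehydration step (the global variable name __INITIAL_STATE__, the App component and the reducer are assumptions for the sake of the example): the server writes the initial state into the page, and the client entry point picks it up before hydrating.

```js
// client entry point, loaded just before the closing body tag
import React from 'react';
import ReactDOM from 'react-dom';
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import App from './App';         // hypothetical root component
import reducer from './reducer'; // hypothetical Redux reducer

// the server embedded the initial state in the page, for example:
// <script>window.__INITIAL_STATE__ = { user: 'jane' };</script>
const store = createStore(reducer, window.__INITIAL_STATE__);

// hydrate attaches event listeners to the server-rendered markup
// without triggering a new client-side render cycle
ReactDOM.hydrate(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById('root')
);
```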

React and Three.js

Daniel

In the autumn of 2015, we got to know the popular javascript library React very well when we used it to create a fun quiz app. Soon the idea arose to research the use of React in combination with Three.js, the leading javascript library for 3D. We have been using Three.js in our projects for some years now, and we expected that using React could improve code quality in 3D projects a lot. Currently, there are two libraries that provide React bindings for Three.js. This post explores their differences using working examples. We hope it will help you make up your mind about which one to choose.

React

React has become a popular choice for creating user interfaces. React keeps a virtual DOM, and changes in the UI are applied to this virtual DOM first. React then calculates the minimal set of changes needed to update the real DOM to match the virtual DOM. This process is called reconciliation. Because DOM operations are expensive, the performance benefit of React is substantial. But there is more to React than the performance impact. Especially in combination with Flux, JSX and the debug tools for the browser, it is a very powerful and yet easy-to-use library for creating complex UIs with reusable components. While React ultimately creates html that is rendered by the browser, there is an increasing number of libraries that provide React bindings for libraries that render to the canvas element, such as D3.js, Flipboard's react-canvas and Chart.js. There are also bindings for SVG, and another interesting experiment is gl-react.

React and Three.js

For Three.js there are two libraries that provide React bindings:

- react-three
- react-three-renderer

Three.js keeps a virtual 3D scene in memory, which is rendered to the WebGL context of the canvas element every time you call the render method. The render method completely clears the canvas and creates the complete scene anew, even when nothing has changed. Therefore we have nothing to gain performance-wise when using React with Three.js, but there is still plenty of reason to use it. React encourages you to create components and move state out of components as much as possible, resulting in cleaner, more maintainable code, and the JSX notation gives you a very clear overview of the hierarchical structure of the components in your 3D scene, as we will see in the code examples in the next chapter.

Two libraries compared

React-three is written in es5; react-three-renderer is newer and written in es6. The following code examples, which both create a simple cube, show us the differences between the libraries. First react-three:

```js
import React from 'react';
import THREE from 'three';
import React3 from 'react-three';

let Scene = React3.Scene;
let Camera = React3.Camera;
let AmbientLight = React3.AmbientLight;
let Mesh = React3.Mesh;

const cube = (
  <Scene width={400} height={400} camera="maincamera">
    <Camera name="maincamera" position={new THREE.Vector3(0, 0, 5)} />
    <AmbientLight color={new THREE.Color(0xffffff)} />
    <Mesh
      geometry={new THREE.BoxGeometry(1, 1, 1)}
      material={new THREE.MeshLambertMaterial({color: 0x00ff00})}
    />
  </Scene>
);
```

And now the same in react-three-renderer:

```js
import React from 'react';
import THREE from 'three';
import React3 from 'react-three-renderer';

const cube = (
  <React3 mainCamera="maincamera" width={400} height={400}>
    <scene>
      <perspectiveCamera
        name="maincamera"
        fov={75}
        aspect={1}
        near={0.1}
        far={1000}
        position={new THREE.Vector3(0, 0, 5)}
      />
      <ambientLight color={0xffffff} />
      <mesh>
        <boxGeometry width={1} height={1} depth={1} />
        <meshLambertMaterial color={0x00ff00} />
      </mesh>
    </scene>
  </React3>
);
```

We see two obvious differences:

1) In react-three we import one object, and this object contains all available components. I have given the components the same names as the properties of the imported object, but I could have used any name. The naming convention in React commands us to write custom components starting with an uppercase letter, which I obeyed willingly. In react-three-renderer we import one component, and the available components are known within this component/tag. This is because react-three-renderer uses internal components, similar to div, span and so on. Note that the names of these components start with a lowercase letter.
2) In react-three the properties geometry and material of the Mesh component are instances of the corresponding Three.js classes, whereas in react-three-renderer both the geometry and the material are components as well. React-three has only 17 components, but react-three-renderer strives to create components for every (relevant) Three.js class, thus gaining a higher granularity.

Creating components

The following example is a Minecraft character configurator that we can use to change the sizes of all the cubes the character consists of.

[Screenshot of the Minecraft character configurator]

It shows you how easy it is to create 3D components with both libraries and how your code benefits from using React, both in terms of organisation and maintainability. All code is available at github and you can find the live examples here. The main component first creates a section that contains all controls, and then the scenegraph containing a plane (World) on which the Minecraft character is placed. All code specific to the Minecraft character is tucked away in its own component, leaving the hierarchical structure very clear despite its complexity. When we take a look at the code of the Minecraft character component, we see how much complexity is actually abstracted away. It builds on a component named Box, which is some wrapper code around a cube. By using this component we not only reduce the amount of code in the Minecraft character module, we also abstract away the differences between the two libraries. This means that we can use the Minecraft character component both in projects that use react-three and in projects that use react-three-renderer. To see the different implementations of the Box component, take a look at the code on github: react-three and react-three-renderer; a sketch of the idea follows below.

Importing models

The model loaders for Three.js load the various 3D formats (Collada, FBX, Obj, JSON, and so on) and parse them into Three.js objects that can be added to the scene right away. This is very convenient when you use Three.js without React bindings, but it requires an extra conversion step when we do use React bindings, because we need to parse the Three.js object into components. I have written some utility code for this, which is available at github. You can find two working examples of how to use this code with both libraries in a separate repository at github. The utility is a parser and a loader in one, and this is how you use it:

```js
let parsedModel = new ParsedModel();
parsedModel.load('path/to/model.json');
```

After the model is loaded it is parsed right away. During the parsing step a map containing all geometries is generated. All these geometries are also merged into one single large geometry, and for this merged geometry a multi-material is created. Now we can use it in a React component; in react-three the merged geometry and its multi-material can be passed straight to a Mesh component (the property names on parsedModel are illustrative; see the example repository for the exact code):

```js
<Mesh
  geometry={parsedModel.mergedGeometry}
  material={parsedModel.multiMaterial}
/>
```

In react-three-renderer we need more code, on the one hand because multi-materials are not (yet) supported, so we cannot use the merged geometry, and on the other hand because of its higher granularity:

```js
let meshes = [];
parsedModel.geometries.forEach((geometry, uuid) => {
  // get the right material for this geometry using the material index
  let material = parsedModel.materialArray[materialIndices.get(uuid)];
  meshes.push(
    <mesh key={uuid}>
      <geometry vertices={geometry.vertices} faces={geometry.faces} />
      {createMaterial(material)}
    </mesh>
  );
});
```

and the resulting meshes are rendered as children of the scene:

```js
<group>
  {meshes}
</group>
```

The createMaterial method parses a Three.js material into a react-three-renderer component; see this code at github.
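To illustrate the Box wrapper mentioned under "Creating components" above, here is a hedged sketch of what such a component could look like in react-three; the prop names are illustrative, and the actual implementations live in the repositories linked above:

```js
import React from 'react';
import THREE from 'three';
import React3 from 'react-three';

let Mesh = React3.Mesh;

// illustrative wrapper: it hides the Three.js geometry and material
// behind a handful of simple props
function Box({width, height, depth, color, position}) {
  return (
    <Mesh
      position={position}
      geometry={new THREE.BoxGeometry(width, height, depth)}
      material={new THREE.MeshLambertMaterial({color: color})}
    />
  );
}
```

The Minecraft character component can then compose a number of these Boxes without knowing which of the two libraries actually renders them.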
Pros and cons

Using React bindings for Three.js results in very clean code. Usually you don't have a hierarchical overview of your 3D scene, but with React your scene is clearly laid out in a tree of components. As a bonus, you can debug your scene with the React browser tools. As we have seen in the Minecraft character configurator, using React is very efficient for applications that use composite components, and we have seen how smoothly React GUI controls can be connected to a 3D scene.

In applications with a flat structure, for instance when you have a lot of 3D objects placed directly on the scene, the JSX code of your scenegraph becomes merely a long list, which might be as hard to understand as the original Three.js representation of the scenegraph. However, with React you can split up such a long list in a breeze, for example by categorizing the 3D objects into groups, each rendered by its own component.

Sometimes using React requires some extra steps, for instance when loading 3D models, and sometimes it might take a bit of time to find the right way of implementing common Three.js functionality, such as user controls or calling Three.js' own render method manually. To elaborate on the latter example: by default both react-three and react-three-renderer call Three.js' render function continuously by passing it to window.requestAnimationFrame(). While this is a good choice for 3D games and animations, it might be overkill in applications with a more static scene, such as applications that simply show 3D models, or our Minecraft character configurator. In both libraries it is possible to turn off automatic rendering by setting a parameter on the scenegraph component, as you can see in the code of the Minecraft character configurator.

Conclusion

For the types of project discussed above I would definitely recommend using React bindings for Three.js. Not only will your code be better set up and thus more maintainable, it will also speed up your work significantly once you have acquainted yourself with the workflow of React. Whether you should use react-three or react-three-renderer depends on your project. Both libraries are relatively new, but as you can see on Github the code gets updated on a weekly basis; moreover, there are lively discussions going on in the issue trackers, and issues and suggestions are picked up quite swiftly. Some final remarks that can help you make up your mind:

- react-three depends on Three.js r72 and React 0.14.2; react-three-renderer works with the most recent versions of both Three.js and React.
- react-three-renderer has not yet implemented all Three.js features; react-three has (mainly because of its lesser granularity).
- in react-three the ray caster doesn't work in combination with controls like the OrbitControls; in react-three-renderer it does.
- both libraries provide excellent examples; studying these will give you a good grasp of the basic principles.

Don't hesitate to get in touch with us if you have any questions or remarks about this post. Feedback is much appreciated.

Threejs rotations

Daniel

In this post we create a first-person 3D setting, and we use rotations to accomplish this. In Threejs you create a scene like so:

```js
let scene = new THREE.Scene();
```

This is the root scene; all other 3D objects have to be added to this root scene to make them visible:

```js
let obj3D = new THREE.Object3D(); // create a 3D object
scene.add(obj3D);
```

The coordinate system of Threejs is a right-handed system, which means that the positive z-axis is pointing towards you.

[Image: left- and right-handed coordinate systems]

In the picture below you see a Threejs scene. The red line is the x-axis, the green line the y-axis and the blue line the z-axis. A dotted line indicates a negative axis. The black arrow in the yellow square is a mesh with a THREE.PlaneBufferGeometry; a mesh is a subclass of THREE.Object3D, the basic 3D object in Threejs. Onto this plane a texture of an arrow has been mapped. The plane has been added to the root scene without any rotation or translation.

[Image: the Threejs axes]

What we can learn from this picture is that a 3D object without translation and rotation gets added to the origin of the scene. And as far as rotation is concerned: a 3D object that has no rotation on any of the three axes stands perpendicular to our line of sight and has its upside in the direction of the positive y-axis.

Creating a floor

If we want to create a floor, or a ground, for our 3D scene, we have to rotate a plane -90° over the x-axis; the video shows how that works out. If you rotate a 3D object in Threejs you only change its rotation in relation to the root scene: its own coordinate system is not affected. In the video the positive y-axis of the plane gets aligned with the negative z-axis of the root scene. What we could do as well is apply the rotation to the root scene as a whole; in that case the axes of the floor and the root scene stay aligned with each other. Both solutions are equally valid, but there is one caveat: if you choose to rotate the root scene, make sure that you do not add the camera to the scene, because that would cancel out the rotations; see this post.

Moving over a floor

Now let's create a proper floor and add the arrow object to it.

[Image: floor and arrow]

Next we want to move the arrow object one unit in the direction the arrow head is pointing. We use trigonometry to calculate the fraction of the unit the arrow object has to move over the x-axis and the fraction it has to move over the y-axis, based on its rotation over the z-axis:

```js
arrow.position.x += unit * Math.cos(arrow.rotation.z);
arrow.position.y += unit * Math.sin(arrow.rotation.z);
```

Because the z-rotation of the arrow object is 0°, and Math.cos(0) is 1 and Math.sin(0) is 0, this boils down to:

```js
arrow.position.x += unit;
arrow.position.y += 0;
```

The translation of the arrow object on the x-axis is 1 unit and the translation on the y-axis is 0, which means that the arrow object moves over the red line (the x-axis) to the right, instead of over the green line (the y-axis) away from us. This is rather counter-intuitive, and it is the result of the fact that in Threejs the top of a 0° rotated 3D object points in the direction of the positive y-axis, a very understandable decision because the y-axis is usually the vertical/upright axis. We can fix this by rotating the floor or the root scene 90° over the z-axis.
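As a sketch, rotating the root scene (angles in Threejs are given in radians):

```js
// rotate the root scene so that the plane becomes a floor...
scene.rotation.x = -Math.PI / 2; // -90°: y now points away into the distance
// ...and swap the x- and y-axis so that a z-rotation of 0° means straight ahead
scene.rotation.z = Math.PI / 2; // 90°
// caveat from above: do not add the camera to this scene,
// or the rotations are cancelled out
```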
Let's rotate the root scene, so that the axes of the floor stay aligned with the axes of the root scene. The arrow object now moves away from us, but the head of the arrow points in the wrong direction: the rotation of the arrow object is still 0°, but the texture on the arrow object (the PlaneBufferGeometry instance) makes us believe that the arrow object has a rotation of -90°. We fix this in the arrow object itself by rotating the texture 90°, which makes the direction of the arrow head consistent with the rotation of the plane it is applied to.

Conclusion

If we rotate the root scene (or the floor) -90° over the x-axis, the y-axis becomes the 'away into the distance' axis, the natural axis to move along when moving straight forward. But because the natural angle of a straight-forward movement is 0°, the x-axis is actually the most natural axis for moving forward, so we also rotate the root scene (or the floor) 90° over the z-axis to swap the x-axis and the y-axis. Now we have created the ideal situation for a first-person setting. You can play with the final result yourself. Code is available at GitHub.

WebVR and Three.js

Daniel

Let's start simple. Our first WebVR application is a big cube in Threejs with a simple 3D scene floating inside that cube. The 3D scene consists of a transparent floor with a few simple rectangular shapes placed on it. On each side of the cube we print the name and direction of the axis that side is facing. Let's call this cube the "orientation cube", and let's call the 3D scene "the world", because that is what it is from the user's perspective. Both the orientation cube and the world are added directly to the root scene, which is the scene you create with the code rootScene = new THREE.Scene(). When wearing an Oculus, you are positioned in the middle of this orientation cube and you can look at all sides of the cube by moving your head. You can move around in the world with the arrow keys of your keyboard.

The API

To get the rotation and position data of the Oculus using javascript, we first query for VR devices:

```js
if (navigator.getVRDevices) {
  // getVRDevices returns a promise that resolves with an array
  // containing all detected VR devices
  navigator.getVRDevices().then(function onFulfilled(devices) {
    detectedVRDevices = devices;
  });
}
```

The detected VR devices can be instances of PositionSensorVRDevice or instances of HMDVRDevice. PositionSensorVRDevice instances are objects that contain data about the rotation, position and velocity of movement of the headset. HMDVRDevice instances are objects that contain information such as the distance between the lenses, the distance between the lenses and the displays, the resolution of the displays, and so on. The browser needs this information to render the scene in stereo with barrel distortion.

[Image: stereo rendering with barrel distortion]

To get the rotation and position data from the PositionSensorVRDevice we need to call its getState() method as frequently as we want to update the scene:

```js
function vrRenderLoop() {
  let state = vrInput.getState();
  let orientation = state.orientation;
  let position = state.position;
  if (orientation !== null) {
    // do something with the orientation,
    // for instance rotate the camera accordingly
  }
  if (position !== null) {
    // do something with the position,
    // for instance adjust the distance between the camera and the scene
  }
  // render the scene
  render();
  // get the new state as soon as possible
  requestAnimationFrame(vrRenderLoop);
}
```

Putting it together

For our first application we only use the orientation data of the Oculus. We use this data to set the rotation of the camera, which is rather straightforward:

```js
let state = vrInput.getState();
camera.quaternion.copy(state.orientation);
```

Usually, when you want to walk around a 3D world in first person, you move and rotate the camera in the desired direction, but in this case that is not possible because the camera's rotation is controlled by the Oculus. Instead we do the reverse: we keep the camera at a fixed position while moving and rotating the world. To get this to work properly, we add an extra pivot to our root scene and add the world as a child of the pivot:

camera
root scene
  ↳ orientation cube
  ↳ pivot
      ↳ world

The camera (the user) stays fixed at the same position as the pivot, but it can rotate independently of the pivot; this happens when you rotate your head while wearing the Oculus. If we want to rotate the world, we rotate the pivot, as sketched below.
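A minimal sketch of this scene setup (the orientationCube and the speed values are illustrative):

```js
const rootScene = new THREE.Scene();
const pivot = new THREE.Object3D();
const world = new THREE.Object3D();

rootScene.add(orientationCube); // the big cube the user is standing in
rootScene.add(pivot);
pivot.add(world); // the world moves and rotates with the pivot

// rotating the world: rotate the pivot
pivot.rotation.y += 0.01;

// moving forward in the world: move the world backwards over the pivot
world.position.z += 0.1;
```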
If we want to move forward in the world, we move the world backwards over the pivot, as shown in the video. You can try it yourself with the live version; the up and down arrow keys control the translation of the world, and the left and right arrow keys the rotation of the pivot. The source code is available at GitHub.

About the camera in Threejs

By default the camera in Threejs is on the same hierarchical level as the root scene. This is like a cameraman who is filming a play on a stage while standing in the audience; theoretically both the stage and the cameraman can move, independently of each other. If you add the camera to the root scene, it is like the cameraman standing on the stage while filming the play; if you move the stage, the cameraman moves as well. You can also add a camera to any 3D object inside the root scene. This is like the cameraman standing on a cart on the stage while filming the play; the cameraman can move independently of the stage, but if the stage moves, the cameraman and her cart move as well. In our application the camera is fully controlled by the Oculus, so the first scenario is the best option. This comes in handy, since we have applied rotations to the root scene (see this post). As a consequence, if we were to add the camera to the scene, the rotations of the scene would have no effect; the video shows an example of a situation in which the scene rotates while the camera is added to that same scene. Note that in most Threejs examples you find online it does not make any difference whether or not the camera is added to the root scene, but in our case it is very important.

The result

We have made two screencasts of the result, captured from the output rendered to the Oculus.