Sunday, 15 May 2011

Facial Expressions

“Expression implies a revelation about the characteristics of a person, a message about something internal to the expresser.”
– Anon, Web.
Facial expressions are one of the most important means of communication amongst humans and, indeed, many species of animals around the world. In 3D animation, or any animation, they are vital storytelling components. This essay will explore the importance of setting up facial expressions when animating, using the character of ‘Mike Wazowski’ from the Pixar film Monsters Inc. (2001) to explore and describe these ideas.

Facial expressions were first seriously studied by Charles Darwin, who suggested that “the main facial expressions are universal” (Nguyen, T. 2005, pg.4) – in other words, that many expressions are shared across various animal species, not merely humans. More recently, Paul Ekman has done extensive research into the theory of facial expressions. Most famously, he has named six basic expressions: Surprise, Fear, Disgust, Anger, Happiness and Sadness (Ekman, P. 2003). Specific methods have also been devised for analyzing facial expressions, most famously the “Facial Action Coding System (FACS)”, which studies muscle movements on the face to determine expressions (Anon – Web. 2011). Most important when studying facial expressions is to remember that “certain facial expressions are associated with particular human emotions” (Anon – Web, 2011, pg.1).

Moving into 3D animation, it is vitally important for the animator to understand and explore the relationship between facial expressions and the emotions they inform. Recognizing emotions through different facial expressions in a film (be it 3D or live action) allows the viewer to identify and relate with the characters, which in turn allows for a greater emotional connection with the film and lets the story be told effectively – when performed well, without needing words at all. This is referred to as “nonverbal communication” (Wikipedia, 2011, pg.1). Recognizing and identifying emotions is an important signifying process wherein there is a “characteristic of a person that is represented”, there is a “visual configuration that represents this characteristic”, there is a “physical basis of this appearance” (i.e. wrinkles, skin, muscles etc.) and there is a third party which “perceives and interprets the signs” (Anon – Web, 2011). Using a variety of different facial expressions in 3D films therefore allows these signifying processes to occur with the viewer, creating not only semiotic but emotional connections with different characters and the film as a whole. Given Darwin’s belief that facial expressions traverse both human cultures and species, well-deformed facial expressions give characters a global relatability to any audience and can help the audience make informed decisions regarding character bias and narrative development, thus making for greater engagement.

I will now turn to a character that I am exploring with regard to facial expressions – Mike Wazowski from Monsters Inc. (2001). Wazowski plays the one-eyed sidekick to the story’s main character, ‘Sully’, and provides much of the comic impetus throughout the film. I specifically chose him as my own modelled 3D character, ‘Wart’, is both one-eyed and completely imaginary. The first of Mike’s expressions I will explore is anger:


In both cases, there are certain overlapping features which seem to define the emotion of anger. Mainly, in my opinion, his anger is determined by the ‘frowning’ of his upper eyelid, which gives his eye a stern appearance. This is coupled with the mouth, which shows two variations of anger – in the first picture it is open in shouting, its lips turning it into something of a snarl. The second mouth shows clenched teeth – a very common attribute of anger. From this first example it is easy to notice how, even in a completely imagined character, certain physical deformations on the face relay universal emotional connotations.

The second emotion for Wazowski is dreamy-eyed romance:
 

Such a romantic expression is interesting as it actually combines two more recognizable facial expressions into one: happiness and laziness. Each is easy to notice on its own but, taken together (and considering the rest of the frame), the two read as a single expression of romance.

The third facial expression for Wazowski shows shock or surprise:



Again, the physical appearance determines the emotion conveyed. In this case, the eye is wide open (as if trying to see better out of apparent disbelief) and the jaw is dropped rather low, leaving the mouth to gape open. Surprise, or shock, has been a firm favourite of animated films (especially the famous ‘jaw dropping’ sequences) because it uses very large physical deformations that can be exaggerated for comic effect, as is apparent in both of the above examples.

Lastly is the emotion of happiness or contentment (which are not necessarily the same, but are related).


The first screenshot clearly shows happiness (a rather eager type too). Once again, the eye is wide open, but the expression here is in the mouth. It is also wide open, but what marks it as happiness is the upward slant of the two corners of the mouth. A very upright posture helps exaggerate the confidence which is brought on by happiness. The second example shows contentment, a somewhat more relaxed form of happiness. Here, Mike’s eyelid rests easily over his eye, an upward push on the top eyelid giving him a sense of quiet confidence. The mouth works in the same way – it is not open, therefore not eager but, again, the upward slant of the corners makes the obvious signifying move towards happiness. The body here is not upright as in the first picture; the happiness here is more relaxed and satisfied – content.

In conclusion, it can be seen that the use of different facial expressions is incredibly important for a 3D character. As shown by Mike Wazowski, they enable engagement and an emotional connection with characters who are neither human nor real at all. As shown by Darwin, facial expressions transcend cultural, linguistic and species boundaries and are understood by any viewer, even without a single word being said.

Works Cited:
1. Anonymous. Emotion and Facial Expression. Web. 2011. http://face-and-emotion.com/dataface/emotion/expression.jsp
2. Anonymous. Facial Expression: A Primary Communication System. Web. 2011. http://face-and-emotion.com/dataface/expression/expression.jsp
3. Docter, Pete (dir.). Monsters Inc. Pixar Films, Distributed by Walt Disney Pictures. 2001.
4. Ekman, Paul. Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues. Cambridge, MA: Malor Books, 2003.
5. Nguyen, Thuy. Universals in Facial Expression. Druck und Bindung: Books on Demand. Norderstedt, Germany. 2005.
6. Wikipedia. Facial Expression. Web. 2011. http://en.wikipedia.org/wiki/Facial_expression

Sunday, 10 April 2011

Customizing Toolbars and Controls

Customizing one’s scenes in XSI is a practical and useful part of working with the program, allowing the user to “take control of the user interface” (Ambrosius, L. 2007, pg.56). This essay will discuss the setting up of various custom toolbars and explore the advantages they offer the user. I will also look at controls which I have set up in my project and how they will benefit future productions.

Firstly, let us explore and define what a custom toolbar is. Essentially, a toolbar is a place wherein controls or functions can be placed. These controls or functions are set up manually by the user according to a particular technical need he/she may have in a certain scene. Kxcad.net notes that such controls may be used “to hold commonly used tools and presets” which are not specifically laid out in the program controls (2011, pg.1). In XSI, a relatively simple way of creating a command toolbar is to use the script controls within the program to define the specific parameters of the control’s function. As these are additions to the program, they are referred to as “Non-self-installing Script-based Custom Commands” and become embedded in a custom toolbar which can be selected in XSI’s view options (Softimage Wiki, 2011, pg.1).

I will now explore some options one may use when setting up a custom toolbar. Firstly, in terms of character setups, there is the option to move quickly between the translate, scale and rotation tools. While one of these three functions may be automatically assigned when clicking on a certain feature (this is done via the Transform Setup property in XSI), it is generally useful to have the others readily at hand to increase the efficiency of one’s workflow. To achieve this, one simply needs to have the specific function selected, open the script editor and copy the logged function into a custom toolbar. This creates a button which can easily be used to swap between the functions. Having such toolbars is useful, but one must keep the various buttons organized by labelling them (either with text or a thumbnail picture) and by splitting toolbars up into categories; failing to do this will not only clutter the workspace but will confuse the user and defeat the initial purpose of using such toolbars.
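The idea of capturing a logged command and replaying it from a button can be sketched as follows. This is an illustrative Python sketch, not the real XSI API: the `MockApp` class, its `ExecuteCommand` method and the command string are all stand-ins for XSI's global Application object and its script history.

```python
class MockApp:
    """Stand-in for XSI's global Application object (illustrative only)."""
    def __init__(self):
        self.history = []

    def ExecuteCommand(self, command):
        # In XSI this would replay the captured script call;
        # here we simply record it so the idea can be demonstrated.
        self.history.append(command)

def make_toolbar_button(app, logged_command):
    """Wrap a line copied from the script history into a reusable 'button'."""
    def button():
        app.ExecuteCommand(logged_command)
    return button

app = MockApp()
# The command name below is hypothetical, standing in for whatever
# appears in the script history when the tool is selected.
set_rotate = make_toolbar_button(app, "SetTransformMode('Rotate')")
set_rotate()  # clicking the button replays the captured command
```

The point of the closure is that each button carries its own captured command, so a whole toolbar of tools can be generated from the script history without rewriting any logic.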

Many other functions can be customised, thanks to the practically endless possibilities that scripting offers the user. One other I shall discuss is the use of global controls. Global controls can be custom-created to control many functions within a particular area of the project. For example, the various finger curls which are set up for each finger can all be controlled together as a hand by creating a function which moves all the single finger controls simultaneously. Furthermore, controls can be made to quickly access functions which may otherwise be tucked deep within the scene explorer and inconvenient to find in a short space of time. Such a control once again gives the user better control over the program and easier access to hard-to-find functions. Additionally, as explained by Darren Brooker, functions which may have incorrect default settings (for example, camera settings in XSI are usually set by default to NTSC as opposed to PAL) can be changed by the user and then, by copying the change from the script history into a toolbar, the new settings can be reapplied by simply clicking the new button (Brooker, D. 2003, pg.264).
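The finger-curl idea can be sketched in Python. This is a hypothetical illustration of a global control driving several parameters at once; the `Rig` class, `set_param` and the parameter names are invented for the sketch and are not XSI API.

```python
class Rig:
    """Minimal stand-in for a character rig's animatable parameters."""
    def __init__(self):
        self.params = {}

    def set_param(self, name, value):
        self.params[name] = value

def curl_hand(rig, amount):
    # A 'global control': drive every per-finger curl parameter together,
    # instead of keying each finger control one by one.
    for finger in ("thumb", "index", "middle", "ring", "pinky"):
        rig.set_param(f"{finger}_curl", amount)

rig = Rig()
curl_hand(rig, 0.8)  # one call curls the whole hand
```

An animator can still override an individual finger afterwards with `set_param`; the global control just provides the convenient common case.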
I will now explore the custom toolbars I have set up in my rig scene in order to improve the workflow and efficiency of my character setup. Before setting up the toolbars, I first set the default functions for various body parts. For the feet I set translate as the primary function, as feet generally move through space more than they rotate within it (rotation being more the business of joints such as elbows and ankles). The same was done for the arms and the central positioning of the character. I then set up the rotation controls for both feet by creating a custom toolbar (as explained above). To organise the controls (and make it easier to differentiate between them) I used bitmap thumbnail pictures which I had created in Photoshop to show clearly which control performed which function. For example, the left foot rotation had a picture of a wireframe foot with circling arrows around it, as well as the letter ‘L’. The same logic applied to the right foot.
Furthermore, I made a hip rotation control in the same way but put it on its own toolbar to differentiate it from the feet, which are in a different location in the body. I then laid out the two toolbars on opposite sides of the interface, naming them ‘Foot Control Toolbar’ and ‘Main Body Toolbar’.

In conclusion, I have seen that the use of custom toolbars greatly increases the workflow efficiency of one’s production. This is done by allowing the user to personalise and take control of his/her interface to meet his/her needs with the creation of extra functions. It also allows the user to quickly access program functions which may be otherwise hard to locate.

Works Cited:
  1. Anon. Custom Toolbars. Web. 2011. http://www.kxcad.net/Softimage_XSI/Softimage_XSI_Documentation/toolbars_shelves_CustomToolbars.htm
  2. Ambrosius, Lee. AutoCAD 2008 3D Modeling Workbook for Dummies. Wiley Publishing Inc. New Jersey. 2007.
  3. Brooker, Darren. Essential CG Lighting Techniques. Focal Press. Burlington, MA. 2003.
  4. Softimage Wiki. Creating Non-Self Installing Script-Based Custom Commands. Web. 2011. http://softimage.wiki.softimage.com/xsidocs/custom_commands_CreatingScriptbasedCustomCommands.htm

       

Sunday, 3 April 2011

Rigging Essay

“A well-made character rig works like an extension of the animator. It does what the animator expects, when they expect it.”
– Jason Schleifer, quoted in Cabrera, C. 2008, pg.9
To animate any character in a 3D software program requires what is known as a rig. A rig is essentially a system of skeletal-like deformers (a series of linked bones and joints) which are then manipulated by control points (Valve Community, Web, 2011). This essay will discuss how the default rig in Softimage XSI works and compare it to another manually created rig which has been built by a 3D animator. It will then discuss the importance of creating a good rig and the correct procedure of going about such a creation.

Most modern 3D software packages come with default rig setups which may serve as templates for new rigging setups or may be used as they are. Rigs may, of course, be created from scratch (which allows for greater creative control over animating one’s character) but, for many, producing these character rigs “can be a daunting task”, as serious attention needs to be paid to the many different areas controlling deformation and animation (Cabrera, C. 2008, pg.75). Default rigs then, although they may not offer the same creative freedom as a manually constructed rig, are highly effective for many animators in creating animatable characters in a shorter period of time.

In order to setup a default rig in XSI, one first needs to create what is known as a Biped Guide (XSI User Guide). This is not the actual rig which will be used but rather, as the name suggests, a guide to setting up the final rig. It basically provides a skeletal reference which can be manipulated within the chosen character mesh. These reference points are moved until they match their respective features on the character mesh surrounding them i.e. finger bones are matched to the fingers on the mesh and so on. Below is an example of a biped guide:
One can also, at this point, toggle the IK/FK controls within the guide in order to understand how the skeleton structure within the character will move, which allows for better animation of one’s character. IK/FK controls can be understood, at a basic level, as how the angles of bone joints relate to one another (with regard to parental hierarchies) in 3D space. IK, or Inverse Kinematics, is “how the child node, as it moves, affects all the parents’ position and orientation values” (Real Illusion, Web, 2011) and FK, or Forward Kinematics, refers to “the effect on the child nodes as the parent moves or rotates” (Real Illusion, Web, 2011). Once these positions are all finalized, the biped guide can be transformed into a proper rig. This rig is enveloped to (i.e. bound to, so that it deforms) the character mesh, which makes the character animatable.
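The FK idea – a parent's rotation carrying through to its children – can be made concrete with a small worked example for a two-bone planar chain (say, an upper and lower arm). The function name, bone lengths and angles are illustrative, not taken from any rigging package:

```python
import math

def fk_end_effector(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics for a 2-bone planar chain.

    theta1 rotates the parent bone; theta2 is the child joint's angle
    relative to its parent, so the parent's rotation propagates down
    the hierarchy exactly as FK describes.
    """
    # Position of the first joint (e.g. the elbow).
    x1 = l1 * math.cos(theta1)
    y1 = l1 * math.sin(theta1)
    # The child bone inherits the parent's orientation (theta1 + theta2).
    x2 = x1 + l2 * math.cos(theta1 + theta2)
    y2 = y1 + l2 * math.sin(theta1 + theta2)
    return (x2, y2)

# With both joints at zero the chain lies straight along the x-axis,
# so the end effector (the 'hand') sits at distance l1 + l2.
fk_end_effector(0.0, 0.0)  # -> (2.0, 0.0)
```

IK is the inverse problem: given a desired end-effector position, solve for the joint angles – which is why IK lets an animator plant a hand or foot and have the rest of the chain follow.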

This default rig, once weighted properly, is rather effective. It has a very similar structure to the human skeletal system in terms of its bone count/positioning and its joints, and therefore allows for realistic movement (although its spine is probably too simplistic). It lacks, however, effective control points for animating the character on a more global scale. These must be created by the user, which can itself become a complex process. It also ‘behaves’ rather unpredictably when rotating the character’s torso areas, and it is not easy to restore the default position once these controls have been changed.

The second rig, which I sourced, is called Nano Man, created manually by an XSI user and released as freeware. This rig has many of the essentials that the default rig contains but adds extra features. Most useful are its global controllers, which allow the user to manipulate larger areas of the character’s body simultaneously and with apparent ease, as opposed to the default rig’s very limited global manipulation controls. In terms of its skeletal structure, it is also much more effective than the default rig, especially if one notes the spine. In Nano Man there are five spine bones (each with corresponding joints, roots and effectors), which significantly increases the flexibility available to the animator. The only real disadvantage of this rig is that, since it was created manually, it may take a while for a new user to become familiar with the creator’s rigging style and how they chose to create control points and so on. Besides this, Nano Man is an effective, quick-to-animate rig which allows for great animation flexibility.



Finally, organization is essential to rigging. This goes all the way from character design up until animation. When designing the character, think of how it needs to move and therefore how its skeletal structure should look to achieve those movements, and then model the character with those skeletal features in mind. When it finally comes to rigging the character, one can avoid a bad rig setup if one has designed and modelled the character effectively.

In terms of the rigging process itself, it is highly important to understand how the hierarchies within the rig work. To make these hierarchies easier to navigate, it is vital to name the rig components. Kim Lee gives an example: “if your character is called Jester, name your bones jester-bone-01, jester-bone-02 and so on.” Lee also notes that naming becomes very important when merging scenes together (2002, pg.165). Another useful technique is to standardize your control shapes – have all effectors as cubes, all roots as circles etc. This will make navigating between the various animatable parameters much easier.
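Lee's naming convention is simple to generate programmatically; the following sketch (the helper name is my own, not from any API) shows the pattern:

```python
def bone_names(character, count):
    # Zero-padded, character-prefixed names per Lee's convention:
    # jester-bone-01, jester-bone-02, ...
    # The prefix keeps bones unambiguous when scenes are merged.
    return [f"{character}-bone-{i:02d}" for i in range(1, count + 1)]

bone_names("jester", 3)
# -> ['jester-bone-01', 'jester-bone-02', 'jester-bone-03']
```

Zero-padding matters in practice: it keeps bones sorted correctly in the scene explorer once a chain passes ten bones.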

In conclusion, this essay has defined what rigging in 3D software is and why it is so important to creating good animations. It has explored and discussed the default rig setup in Softimage XSI and compared it to a custom-built rig in order to understand how default rigs compare with manually created ones (which led to the conclusion that a manual rig allows much more creative freedom and is therefore a better choice in the long run). Lastly, I have explored some tips and organizational strategies in rigging which allow for a more efficient process and which give the animator a greater grasp of his/her creation, subsequently allowing for a greater final product.

(1047 words)

Works Cited:
  1. Cabrera, Cheryl. An Essential Introduction to Maya Character Rigging. 1st Ed. Elsevier. Oxford, UK. 2008.
  2. Lee, Kim. Inside 3Ds Max 4, Volume 1. 5th Ed. New Riders Publishing. USA. 2002.
  3. Real Illusion. What Is IK/FK. Web. http://www.reallusion.com/iclone/Help/iClone3/08_Animation/Motion_Layer/What_is_IK_FK.htm. 2011.
  4. Softimage XSI 2011. User Guide. 2011
  5. Valve Community. Rigging in XSI. Web. http://developer.valvesoftware.com/wiki/Rigging_your_Custom_Character. 04/03/2011.


Tuesday, 22 March 2011

Lighting Scenes Motivation

Lighting Motivation

For my three different lighting set-ups/rigs I tried to experiment not only with different tones/stylistic elements but also with different lighting software techniques that XSI provides.

For my first set-up, I wanted to continue with the retro 70s stylisation I had aimed for with my texturing. I envisioned the room as a sort of V.I.P. section or private lounge off the main floor of a dance club (or discotheque). These rooms generally had slightly diffuse (or 'underlit') lighting, while still being shaped or directional from certain light sources. The slightly dim lighting also allowed for an interesting interplay between light and shadow (the shadows not hard, but still clearly evident). This technique helped achieve a certain ambience, meant to evoke a 'cool' yet provocative atmosphere. Below is a reference example:



In terms of the rig, I used a slightly modified version of a three point lighting set-up. Essentially, my key light was sourced from the light 'emitted' from the dance floor outside the room. This light therefore has a warm colour temperature and its luma has a red tinge (similar to the red ambient effect of the picture above). My fill light was a spot light placed opposite the direction of the key light (facing the window), set to a lower intensity with a slightly pink tint. The backlight was split amongst three spot lights all pointing down the back wall, providing a rounded edge to the light and also serving as a background light (as in a four point scheme). The final result is below:


My second set-up was a night scene. At first I wanted to recreate the same V.I.P. room as it would be after the club closed and the lights went out, but I didn't want to use the same red tones that would have had to accompany such a scenario. Instead I chose to locate the room within a starry night sky. However, in keeping with the 70s theme, I wanted the sky to be somewhat psychedelic as opposed to realistic, hence my final outside texture. Below is a reference for this scene:




The set-up was based on single point lighting (as in moonlight), with extra lights added to provide additional detail and focus within the scene. The key light was an infinite light, to which I added a light blue colouring and angled so that the window panes cast hard shadows into the room (which is how I have observed night light shadows to fall). I then added a point light on the windowed wall to give the scene some slight additional fill (it appeared too dull with just the 'moonlight'). I used attenuation on this light to give the lighting a gradient, so it would not appear balanced and therefore flat (night scenes are generally high in contrast and would not look flat). I then used an additional spot light, not as backlighting but to pull focus to the picture, which I felt added a little more character and focus to the scene. The result is as follows:
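The attenuation gradient mentioned above can be sketched as a simple linear falloff between a start and end distance. The function and the default distances below are illustrative, not XSI's actual falloff model:

```python
def attenuated_intensity(base, distance, start=2.0, end=10.0):
    """Linear light falloff: full intensity up to `start`,
    fading to zero at `end`. This gradient is what keeps the
    fill from looking evenly balanced and therefore flat."""
    if distance <= start:
        return base
    if distance >= end:
        return 0.0
    return base * (1.0 - (distance - start) / (end - start))

attenuated_intensity(1.0, 6.0)  # -> 0.5 (halfway through the falloff)
```

Real renderers also offer inverse-square falloff, which is physically accurate; the linear version here is just the easiest way to see how the start/end distances shape the gradient.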



For my third scene, my main intention was to experiment with lights that would appear visually different, specifically volumic lights. With this in mind, I wanted to create a scene with a more theatrical tone. I wanted strong contrasts, using a single light set-up with a volumic property to create the feeling of theatrical lighting. I also wanted to use another single strong light pointed at a specific object within the room, to give the environment some sort of narrative inclination and, again, to increase the feeling of theatrical drama. Below are two references; the first shows the nature of the volumic light I wanted to achieve and the second shows a spot light with a very specific point/direction.



In setting it up, I used a spot light coming in from the open side of the room. I added the volumic property and tweaked it until it had an orange glow along its path (once again matching the colour scheme within the room). Adjacent to this light, I aimed a second volumic spot at the wine glass on the table (to create a sort of narrative significance) but found I had to raise its intensity rather high for it to be visible in the path of the key spot. I also had to exclude from this light other geometry, which otherwise burnt out due to its high intensity. The result is as follows:






Monday, 21 March 2011

Lighting Research Essay

“Lighting is more than just illumination that permits us to see the action”
Bordwell, David and Kristin Thompson. “Film Art: An Introduction” pg. 126

Good lighting is vitally important to any scene, whether in 3D software or in live action film. Besides its basic function of allowing the viewer to see the characters and environments in a particular setup, it adds incredible dynamics to the scene. It can add stylistic meaning, bring out certain textures and even play a role in informing the narrative (as a vital part of mise-en-scene). In this essay, I will explore the setup and motivation for a fundamental lighting rig known as Three Point Lighting. I will then compare this rig to another lighting example, Four Point Lighting, to show how different lighting setups can dramatically change and enhance the overall appearance of a scene.

The first rig to be explored is Three Point Lighting. According to Jan Ozer, this setup “has its roots in lighting as art rather than lighting as a necessary evil for the camera to do its work” (2004, pg.40). This setup therefore allows objects in scenes to be lit in such a fashion that they do not appear ‘flat’ (the term used to describe lighting which remains uniform throughout an environment and removes any sense of depth and shadow variation).

The first light in a three point setup is known as the key light. Nicholas Boughen defines this light as “the primary source of light” which “provides primary illumination” (2007, pg.69). The most obvious naturally occurring example of a key light is the sun, since it provides Earth’s primary light source. In 3D software, this is often mimicked with an infinite light. In a three point setup, the key light provides the most direct illumination on a subject and is usually placed at a slight angle to the object in order to give it a certain shadow fall-off. Ozer describes this as ‘modelling’ (2004, pg.40).

The second light in the setup is known as the fill light. This light serves to “illuminate areas that are shadowed from the key light” and therefore provides extra defining detail on the surface of a subject which may have been overshadowed by the key light. The intensity of the fill light should “be less than the key light” so that it fills in extra detail but does not become a key light in itself (Boughen, N. 2007, pg.70).

The third light in the setup is known as the backlight or rim light. The main purpose of this light is to separate the subject from its background and, therefore, to provide a greater sense of depth within the scene. It also helps to “define the shape” and provide a “defined edge for blue or green screen shots” (Boughen, N. 2007, pg.71). In terms of placement, the backlight is usually “on the same side as the key light” (Callow, R. 2008, pg.1). The three point setup looks like the following:

The next rig to explore is the Four Point Lighting setup. This setup, as the name suggests, is basically the addition of an extra light, but the effect produced can drastically change the overall appearance of a scene. The four point setup is used in two main situations: portrait shots of people/characters and general environments. Boughen describes the additional light as a “bounce light” and notes that this light “is reflected from the ground in front of objects” (2007, pg.78). He also notes that, where key, fill and backlights are usually lit from above, this bounce light shines on the subject from below and can add a subtle effect by filling in extra shadowed areas (especially the areas below the eyes).

For a general environment setup, the extra light used is referred to as the background light and is used “to give depth to the image by putting some mixture of light and shadow on the wall behind the subject or subjects” (Burley, S. 2009, pg.1). In terms of intensity, it works in the same fashion as a fill light, but its placement can dramatically alter the appearance of a scene. Burley gives an example of an effective background light: it can be placed behind a window to cast windowpane shadows onto a wall within the room. This adds a stylistic element to the scene as well as a specific dramatic tone, something that a basic three point setup cannot always achieve. Below is an example of a Four Point Lighting setup:

The main difference between these two lighting setups is the stylistic/tonal quality that four point lighting can add (from an environmental point of view) and the extra definition and detail it can provide (from a portrait point of view). The problem with using four point as opposed to three point is that, if not set up correctly and with the correct relative intensities, it can make the scene look flat-lit (due to the fourth light filling in even more shadows). A big problem with three point lighting, as Boughen describes, is that it is “the most grossly overused and inappropriately used lighting setup in the world of CG” (2007, pg.77). It is worth noting that an overlit scene “flattens everything and diminishes details” while an underlit scene can be “muddy, gray and rather lifeless” (Derakhshani, D. 2009, pg.439). A well-lit scene generally has a balanced ratio of light to shadow; excluding shadows altogether may make for a very dull and non-dramatic visual result.
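The relative-intensity point can be sketched numerically. The ratios below are illustrative starting points of my own choosing, not values from the cited texts:

```python
def three_point_intensities(key, fill_ratio=0.5, back_ratio=0.75):
    """Derive fill and back intensities from the key light.

    fill_ratio must stay below 1.0 so the fill softens shadows
    without becoming a second key light (which would flatten
    the scene, as discussed above).
    """
    assert 0.0 < fill_ratio < 1.0, "fill must be dimmer than the key"
    return {
        "key": key,
        "fill": key * fill_ratio,
        "back": key * back_ratio,
    }

three_point_intensities(100.0)
# -> {'key': 100.0, 'fill': 50.0, 'back': 75.0}
```

Raising `fill_ratio` towards 1.0 is exactly the flat-lighting failure mode: shadow contrast shrinks until the image loses depth.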

In conclusion, I have defined and explored two very practical and useful lighting setups – three point lighting and four point lighting. I have explored the technical requirements for each of these setups (from positioning to intensity) and have discussed how they may be used to meet certain dramatic requirements within a scene. Lastly, I have compared the two setups to show how, in general, lighting setups/rigs need very specific intention in their build-up, as every setup comes with its own stylistic advantages and disadvantages. It is the understanding of how the balance of light and shadow within a particular environment/character affects the overall tone and stylistic elements of a piece which defines a good lighting rig.

Works Cited:
1.      Bordwell, David and Kristin Thompson. Film Art: An Introduction. 2nd Ed. McGraw-Hill Book Co. Singapore. 1989
2.      Boughen, Nicholas. Lightwave v9 Lighting. Wordware Publishing Inc. Plano, Texas. 2007
3.      Burley, Shane. How To Do Four Point Lighting. http://www.brighthub.com/multimedia/video/articles/59485.aspx. Web. 2009
4.      Callow, Rhonda. How To Do The Three Point Lighting Technique. http://www.brighthub.com/multimedia/video/articles/13931.aspx. Web. 2008
6.      Ozer, Jan. Traveling Light. E-Media – The Digital Studio Magazine. 2004.



Sunday, 13 March 2011

Texturing Motivation (Room Model)

Texture Motivation for Model Room

My original texturing aesthetic centred on the bright neon colours prominent in 1970s disco culture. I chose this direction out of a desire to do the polar opposite of what a ‘regular’ room would look like. Unfortunately, shortly into the texturing I found myself dissatisfied with the overall look of the room – partly due to my own inability to make such bright, contrasting colours work in a cohesive fashion, and partly because I do not actually like the neon colour palette.

I then researched styles similar to my original idea and found an interest in the 1970s retro colour palette. Almost the polar opposite of the disco scene’s neon colours, a very earthy palette was also prominent at the time – one of oranges, yellows, browns and whites, similar to the image below:

Another feature was the use of repetitious geometric patterns (usually rendered in these colours), such as circles, squares or interesting combinations of shapes. Many of these shapes were arranged in patterns that repeated in particular ways to create a new overall look, as in the examples below:

My new problem became one of balancing these rather complex patterns with plainer textures. Looking through many different pictures, I found that while walls (and even ceilings and floors) tended to carry these elaborate patterned designs, most of the other items in such rooms (chairs, tables and so on) tended to be very plain, favouring solid whites, browns or yellows. This provided the balance I needed and, as such, I went with one couch in brown and the other in yellow.

My final aesthetic concern was to have a theme within the main style. I chose a sort of ‘artists’ pad’ (for lack of a better term). These little pads, or studios, were very popular amongst artists, musicians, writers and other artisans of the period as communal places for relaxing, discussing world issues and, above all, creating art, whether in drawing, painting, music, film or writing. I chose this room to be a music studio of sorts, which accounts for the brick wall surface on the left side and the parquet floor, as many different surfaces were usually combined in these creative spaces. Although it overlooks a discotheque, the room itself is much more in line with the rock culture of the time. In particular, as I’m a big fan of 70s progressive rock, I thought I’d add touches of the genre to the room’s props. This is where the picture on the wall becomes relevant (‘Can’ were a very obscure 1970s krautrock band), as do the magazine on the table (showing Peter Gabriel, who, when with Genesis, epitomised everything that was progressive rock in the 1970s) and the Persian rug (many 70s bands across genres would play concerts with the entire stage floor covered in Persian carpets; Led Zeppelin and Yes were the most notable purveyors). These additions made further sense where they adhered to the colour palette I was working within.
Overall, I am fond of the room’s aesthetic. Besides applying the textures, I enjoyed adding certain reflections, glosses, frosts (as on the table) and other texturing effects, which give the room a more lifelike look.

Texturing Essay


“To make a texture believable, you have to be able to convey to viewers exactly what the surface would feel like if they were to reach out and touch it.”
-          Leigh Van Der Byl, 2004, pg.3
In order to bring a 3D model to life, as it were, it needs a surface and texture that make it believable as a real-world object. 3D programmes provide a range of dynamic capabilities when it comes to texturing the surfaces of 3D objects. In this essay, I will explore 2D and 3D texture mapping techniques in terms of how each is utilised in 3D software and how they differ. I will also highlight the advantages and disadvantages of each method.

I will firstly define and explore 2D texture mapping techniques. Hong Zhang defines 2D texturing as the mapping of a “2D image to the surface of a 3D object” (2006, pg. 409). This effectively allows photographic images taken from real-world spaces to be projected onto any object within a 3D space. The precise mapping of these 2D images onto 3D objects involves a technique called UV mapping: the process of ‘pinning’ points of a 2D image to specified points on the 3D object, depending on how the user wishes the image to be projected. Peter Ratner notes the advantages of UV mapping, saying it “works well for irregular shapes” and “is used for precise placement” (2003, pg. 215).
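The ‘pinning’ idea can be sketched in a few lines of Python. This is a hypothetical, simplified illustration (real packages handle interpolation and filtering internally): each vertex of the model stores a (u, v) coordinate into the 2D image, and shading the surface means looking up the pixel at those coordinates.

```python
# A minimal sketch of UV lookup: each 3D vertex is "pinned" to a point
# on a 2D image via a (u, v) coordinate in the range [0, 1].

def sample_texture(image, u, v):
    """Nearest-neighbour lookup of a pixel for UV coords in [0, 1]."""
    height = len(image)
    width = len(image[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return image[y][x]

# A 2x2 "image" of colour names and a triangle whose vertices are
# pinned to three corners of that image.
image = [["red", "green"],
         ["blue", "white"]]
triangle = [
    {"pos": (0, 0, 0), "uv": (0.0, 0.0)},   # pinned to top-left
    {"pos": (1, 0, 0), "uv": (0.99, 0.0)},  # pinned to top-right
    {"pos": (0, 1, 0), "uv": (0.0, 0.99)},  # pinned to bottom-left
]
colours = [sample_texture(image, *vert["uv"]) for vert in triangle]
print(colours)  # ['red', 'green', 'blue']
```

Because each vertex can be pinned independently, irregular shapes can be covered precisely, which is exactly the strength Ratner describes.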

3D software further provides the user with different methods (presets of sorts) of mapping these image projections depending on the shape of the 3D object to be textured. These are now highlighted and explained:
1.       Planar Projections – These are used on flat surfaces (such as walls and floors) and can be compared to “projecting a slide through a slide projector” (Ratner, P. 2003, pg. 216).
2.       Cylindrical Projections – Such projections wrap a texture around an object, “similar to a bark around a tree trunk” (Ratner, P. 2003, pg. 216). They are useful for objects like poles, sticks etc.
3.       Spherical Projections – These projections ‘roll’ around the object “as if you were wrapping skin around a ball” (Ratner, P. 2003, pg. 216). Rounded objects like balls, planets and light bulbs use this method.
4.       Cubic Projections – Cubic projections wrap around six-sided objects, which can be anything from a fridge to a CD case.
In addition to choosing a method, the user defines the axis or axes (x, y and z in a 3D space) along which the projection will be implemented. The combination of the different projection methods with the ability to apply them along any combination of axes makes for very specific and accurate mapping capabilities.
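Under the hood, each preset is simply a different rule for deriving (u, v) from a surface point. The Python functions below are a hypothetical sketch of that maths for the first three presets (the function names are mine, not from any package), using the slide-projector, tree-bark and ball analogies quoted above.

```python
import math

# Illustrative sketches of how each projection preset might derive
# (u, v) texture coordinates from a 3D surface point (x, y, z).

def planar_uv(x, y, z):
    """Project straight down one axis, like a slide projector."""
    return x, y  # z is simply ignored

def cylindrical_uv(x, y, z):
    """Wrap around the y axis, like bark around a tree trunk."""
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0  # angle around the axis
    return u, y  # height along the axis maps straight to v

def spherical_uv(x, y, z):
    """Wrap around the object, like skin around a ball."""
    r = math.sqrt(x * x + y * y + z * z)
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0  # longitude
    v = math.acos(y / r) / math.pi                # latitude
    return u, v
```

A point on a sphere's ‘equator’, such as (1, 0, 0), lands at v = 0.5 under the spherical rule, i.e. halfway down the image, which matches the skin-around-a-ball intuition.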

By contrast, 3D texture mapping does not work on 2D-based images. Instead, as Isaac Victor Kerlow defines, 3D texture maps are “solid textures that exist on the surface of an object as well as inside the object” (2000, pg. 255). These textures are generated by algorithms performed by the computer and are referred to as “procedural texture maps” (2000, pg. 255). ‘Procedural’ as a term comes from computer science jargon, where it serves to “distinguish entities that are described by program code rather than by data structures” (Ebert, David S. 2003, pg. 12). These mathematical functions render abstract and seemingly random patterns which then occur throughout the 3D object to which they are applied.

Such a technique is useful for 3D objects which have a recurring, yet random, pattern pervading the entire object, for example marble or wood. Procedural textures are also useful in environment simulation and have indeed been used since the earliest days of 3D graphics. Ebert notes that early practitioners like Schacter and Ahuja used a 3D texture generation technique known as Fourier synthesis to “generate texture imagery for flight simulators” (2003, pg. 11).

Comparatively, each of these techniques (2D texturing versus 3D texturing) has both its advantages and disadvantages.
Let us explore some of the main advantages of 2D texturing. Firstly, a 2D texture can be practically any photograph, painting, drawing or bitmap image, which allows for a very realistic appearance on the 3D object (if mapped effectively). 2D texturing also renders more quickly (even though the image file is generally larger than a procedural one). Also, thanks to the very specific nature of UV mapping, the user is able to define the exact parts of an object onto which he/she wishes the image to be projected. Ratner also notes that 2D texturing is the most common approach in the industry (2003, pg. 215).

On the other hand, there are also notable disadvantages. First and foremost, a 2D texture does not occupy a 3D space but rather requires “specific mapping coordinates in order to be rendered correctly” (Autodesk, 2006, pg. 444). This means that if the user were to make a cross-section of an object to which a 2D image had been applied, only the surface would carry the texture. Furthermore, 2D images applied to 3D models tend to look much flatter than a 3D generated texture. There is, of course, the option of bump mapping, which “simulates the appearance of a rough surface”, but even this technique can only provide a simulation of depth (Ratner, P. 2003, pg. 221). 2D images also become more and more pixellated the closer one zooms into them, because they are fixed images of a fixed resolution. In addition, as Brian Ross notes, there can be problems of stretch marks, seams and tiling (if a small image is applied repeatedly over a large area) (1998, pg. 1).
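The bump-mapping trick mentioned above can be sketched briefly. This is a hypothetical simplification (the function name and height-map layout are mine): a greyscale height map perturbs the surface normal so that lighting suggests relief, even though the geometry itself stays perfectly flat, which is why it only ever *simulates* depth.

```python
# A minimal sketch of bump mapping: perturb a flat surface's normal
# using the local slope of a greyscale height map. `height` is a 2D
# grid of values in [0, 1]; the geometry is never actually displaced.

def bump_normal(height, x, y, strength=1.0):
    """Fake a bumped normal from finite differences of a height map."""
    dx = (height[y][x + 1] - height[y][x - 1]) * strength
    dy = (height[y + 1][x] - height[y - 1][x]) * strength
    # Perturb the normal of a surface facing +z, then re-normalise.
    nx, ny, nz = 0.0 - dx, 0.0 - dy, 1.0
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# A uniform height map has no slope, so the normal stays (0, 0, 1):
# the lighting looks flat, exactly as the underlying geometry is.
flat = [[0.5] * 3 for _ in range(3)]
print(bump_normal(flat, 1, 1))  # (0.0, 0.0, 1.0)
```

Since only the normal changes, a silhouette or cross-section of the object immediately betrays the flatness, matching the limitation noted above.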

Turning to 3D texturing, there are also notable advantages and disadvantages.
The first advantage of 3D texturing is that “the procedural representation is extremely compact” (Ebert, David S. 2003, pg. 14). While the computer may take more time to render a 3D texture, the texture itself is a very small file (usually kilobytes) in comparison with a 2D image (usually megabytes) (Ebert, David S. 2003, pg. 14). Secondly, because it is based on mathematical formulae, a procedural texture has no fixed resolution and will therefore remain fully detailed regardless of how close one zooms in on it (Ebert, David S. 2003, pg. 14). The same formulae give 3D texturing the additional advantage of infinite variation, thereby avoiding any seams or ‘tiling’ effects (Ross, B. 1998, pg. 1). Ross further notes that such generated textures have the additional option of being animatable.

In terms of disadvantages, it is noted that properly generating a 3D texture can be difficult and often requires complex programming (Ebert, David S. 2003, pg. 14). In conjunction with this, it is often easier, and more accurate, to use a found image that represents the texture than to try to generate it. Furthermore, although 3D texture files are smaller than 2D image files, they often take more time for the computer to evaluate and render than a 2D image would. Lastly, aliasing and anti-aliasing can be tricky and “is less likely to be taken care of automatically than it is in image-based texturing” (Ebert, David S. 2003, pg. 15).

In conclusion, this essay has comparatively defined and analysed the techniques of both 2D and 3D texture mapping. It has looked at how both provide unique tools and approaches to creating believable textured objects, how each technique suits specific texture mapping purposes and finally it has made a comparative list of each method’s advantages and disadvantages.
(1150 words)

Works Cited:
1.      Autodesk. 3DS Max 9 Essentials. Autodesk, Canada. 2006
2.      Ebert, David S. Texturing and Modeling: A Procedural Approach. 3rd Ed. Morgan Kaufmann Publishers. San Francisco, CA. 2003.
3.      Kerlow, Isaac Victor. The Art of 3D Computer Animation and Imaging. 2nd Ed. John Wiley & Sons. New York, NY. 2000.
4.      Ratner, Peter. 3D Human Modeling and Animation. 2nd Ed. John Wiley & Sons. Hoboken, New Jersey. 2003.
5.      Ross, Brian. Texture Mapping. http://www.cosc.brocku.ca/Offerings/3P98/course/lectures/texture/. 1998. Web.
6.      Van Der Byl, Leigh. Lightwave 3D 8 Texturing. Wordware Publishing. Plano, Texas. 2004.
7.      Zhang, Hong. Computer Graphics Using Java 2D and 3D. Pearson Education. New Jersey. 2006.