Ever since James Cameron’s Terminator 2: Judgment Day was released in 1991, I’ve been reading about the many ways ILM, led by visual effects supervisor Dennis Muren, had to basically invent new ways to realise the CG ‘liquid metal’ T-1000 shots in that film, of which there are surprisingly few. Tools like ‘Make Sticky’ and ‘Body Sock’ are ones that I’d heard referenced several times, but I’ve always wanted to know more about how those pieces of software were made.
So, over the past few months, leading up to the re-release of Terminator 2 in 3D, I’ve been chatting to the artists behind the technology who were there at the time. This was when ILM was based in San Rafael, and when its computer graphics department was still astonishingly small. Yet despite the obvious challenges in wrangling this nascent technology, the studio had been buoyed by the promising results on a few previous efforts, including Cameron’s The Abyss, and by the possibilities that digital visual effects could bring to modern-day filmmaking.
For this special retro oral history, vfxblog goes back in time with more than a dozen ILMers (their original screen credits appear in parentheses) to discuss the development of key CGI tools and techniques for the VFX Oscar-winning Terminator 2, how they worked with early animation packages like Alias, and how a selection of the most memorable shots in the film – forever etched into the history of visual effects – came to be.
Gearing up the computer graphics department
Tom Williams (computer graphics shot supervisor): I actually worked full-time for both Pixar and ILM for most of T2. Then I realised that was really dangerous. I fell asleep driving home once, freaked myself out, and realised you can’t really do that. So towards the end of T2 I went over to ILM full time. The way I got there originally was, I got invited by [visual effects producer] Janet Healy and [visual effects supervisor] Dennis Muren because I had worked at a company called Alias, which did modelling and animation tools.
George Joblove (computer graphics shot supervisor): Each single gig at ILM was a small step above what we’d done before. And we were fighting with the limited computing resources we had at the time. We had done The Abyss which was a big step forward in a couple ways. First of all, in demonstrating what was possible and achieving it. Second of all, working for Cameron who had that great vision for how it could be used in The Abyss. With that film, had we not been able to pull it off, there would have been ways to work around it. But I don’t think there was any such opportunity in T2.
Eric Enderton (computer graphics software developer): Terminator 2 was my first big movie. I saw The Abyss in the SIGGRAPH film show and thought: I want to work for those guys. Fortuitously the CG group had decided to hire their first tools writer. They had lots of software but it was all being written by the same people who were doing the shots. I was the first ‘software-only’ person in ILM computer graphics, which obviously was a huge learning experience and just an amazing time.
Jay Riddle (computer graphics shot supervisor): I had been working at ILM for several years and had learned how to animate by sitting with John Lasseter when he was in the Graphics Group, which was part of the Computer Division of Lucasfilm at the time. They were using this vector graphics display that they used with their own in-house software that they’d written, and they had this frame buffer. They were still in our building, and then they moved out to one of the other Lucasfilm buildings while they were trying to spin off and get their own place, which they eventually did. Just as they left, we were doing The Abyss, and then they were kind of fully gone by the time T2 came around.
Michael Natkin (computer graphics software developer): I showed up at ILM in a suit, which was hilarious. I remember Eric Enderton and George Joblove and a few other folks took me up to the Ranch for lunch and showed me around and I was like, ‘Sure. Hell, yeah. I’ll do this. Let’s make it happen.’ I knew a lot about computer graphics, but nothing about movies whatsoever, so there was quite a learning curve.
Jonathan French (computer graphics animator): The process of even starting at [ILM] was kind of novel. I landed in SFO at 11am and after finding an airport car rental agency that would rent to someone 23 years old I drove straight to ILM in Marin. I think after I signed the NDA they immediately handed me the script to read, a small stapled booklet on ILM film terminology and tools, and then about ten people on the team kindly took me to lunch at an Afghan restaurant, which I am pretty sure was the only Afghan restaurant in Marin. The next morning in dailies I got introduced by Douglas Kay to the team in the screening theatre and everyone turned around and applauded. Three things go through your mind at that point: one, how supportive these people are, two, I better live up to my own expectations, and three, I better live up to theirs. It worked out ok.
Steve ‘Spaz’ Williams (computer graphics animation supervisor): I was at Alias and had been pushing for VA – video animation – but Alias was into ID, which stood for industrial design. At the time, VA was this very small budding thing. Then ILM called and they had purchased a cut of Alias, and so the first thing they had me do was a ride they were doing at Epcot Center called Body Wars – it was a fly-through of the heart. Then James Cameron came to ILM with The Abyss and from there we went on to Terminator 2.
“I’d point to a page and say, ‘Oh, well that looks interesting. How are you going to do that?’ And they’re like, ‘Oh, we don’t know yet.'” – John Schlag
Stefen Fangmeier (computer graphics shot supervisor): My role on T2 was as a technical director. Meaning that I would concentrate on rendering and compositing rather than modeling and animation. Back then, TDs really needed to have programming experience and since I have a computer science degree, these tasks were a natural fit for me. My tasks were to support the animators in technical areas which included writing C-shell scripts for frame to frame processing. Many of the features for doing this are now included in commercial software packages, but back then, most of the procedural, frame-by-frame batch processing had to be created from scratch.
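To give a rough sense of what that frame-by-frame batch processing involved, here is a minimal sketch – written in Python rather than C-shell, with invented command names and file paths – of the kind of per-frame loop a TD would have had to build by hand:

```python
# Illustrative stand-in for a TD's per-frame batch script: for every frame of a shot,
# render the CG element and then composite it over the background plate.
# "render" and "comp_over" are hypothetical command names, as are the file paths.
import subprocess

FIRST_FRAME, LAST_FRAME = 1, 120

for frame in range(FIRST_FRAME, LAST_FRAME + 1):
    fr = f"{frame:04d}"
    # Render the CG element for this frame.
    subprocess.run(["render", f"shot_cc1.{fr}.rib",
                    "-o", f"cg/cc1.{fr}.tif"], check=True)
    # Composite the CG element over the scanned background plate.
    subprocess.run(["comp_over", f"cg/cc1.{fr}.tif", f"plates/bg.{fr}.tif",
                    "-o", f"comps/cc1.{fr}.tif"], check=True)
```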
Geoff Campbell (computer graphics animator): [I was at MPC] in the summer of 1990 when I received a phone call from ILM who wanted to set up a telephone interview regarding a new film they were starting work on. It turned out that Steve ‘Spaz’ Williams had reviewed my portfolio and had asked for the interview. The phone call came one morning at 2am and woke me out of my sleep, catching me completely off guard. I remember slurring my speech while standing at the bottom of the landing, freezing in my underwear. That was also before satellite phones, and the static and delay of the transatlantic connection was almost comical.
Everyone on the ILM side was asking me serious questions about my abilities, schooling etc., but every now and then Steve would chime in with a question asking me things like did I have any pets? I told him I had a cat back in Toronto, and his follow-up question got into specifics like my cat’s name and what type of cat food I served him. A week later I got the job and started working on Terminator 2 on Halloween day. Looking back I realised that Steve was serving me up a shorthand during my London interview. I had already gotten the job and the interview was just a formality.
Tom Williams: When I came onto the show, ILM had all the storyboards up because there were some particularly tricky shots that they were mulling over. They were just stuck. They were all color-coded. I was looking at them, and was like, ‘Oh yeah, the greens, I could do those, and the yellows, that would be fun. I think I know how to do that.’ Then there were the blacks. I was like, ‘Wow.’ There was ‘head through bars’, and some of the stuff where the surfaces would merge with each other, like when the T-1000’s hook hand gets stuck in the car and then melts back into his shoe. And ‘head through floor’. They said, ‘We want you to help us with the black ones and all the things with a black dot on it.’ I was like, ‘Awesome.’ When someone says, ‘Yeah, we’re not sure how to do this,’ you can’t do worse. My failure mode was to meet their expectations, I think.
John Schlag (computer graphics software developer): On my first day at work, I came in the door, they sat me down, and they showed me the storyboards, and they went through this binder. And I’d point to a page and say, ‘Oh, well that looks interesting. How are you going to do that?’ And they’re like, ‘Oh, we don’t know yet.’ I’m like, ‘You people are batshit! You’ve got to be kidding me! You bid this job, and it came in, but you don’t know how to do the work?’ So that was a big wake-up call on my first day at work in real visual effects, to realise, you know, you make this Hail Mary bid, and lo and behold it comes in, and you’re celebrating, and then terrified.
Michael Natkin: Actually, I also remember on my first day on the job, George Joblove took me down to watch them blow up the practical warehouse for Backdraft, which was amazing. It was a really neat time at ILM because it was right as the transition was happening from everything practical and optical to everything digital.
Jonathan French: The machine room, which did double duty as a night-time render ‘farm’, was downstairs, near the Pit. The Pit is now a part of ILM folklore, but it was essentially Spaz’s space he shared with Mark Dippé and I think at various times Wade Howie, Jim Mitchell, and others. It was a fun place whenever I had reason to go down there, all 70’s scotch-stained shag carpet, hockey sticks and music posters, and soundproofed to the rest of the building. So you’d go in there and Stompin’ Tom Connors or Thin Lizzy would be on at full volume. I mean the kind of full volume where you open the door and your hair blows back. It was sort of as if the famous Horseshoe Tavern Bar in Toronto had been converted into a basement rec room.
“I give Cameron a lot of credit, the pseudopod from The Abyss and the liquid metal man in T2 are the same principle – they are what I would call the classic, perfect digital character.” – Mark Dippe
Anyway, on one of my first Friday nights working in the large graphics room I actually heard what I thought was bagpipes coming through the floor of the large graphics room. I asked Geoff Campbell, and he said, “Oh, ya that’s Spaz. He always plays bagpipes Friday nights.” That Spaz would later purchase a drag car, a tractor, a welder, and a working tank for his personal use also made total sense to me. For all these reasons the place and the people in it, and the work environment are probably not going to be replicated today. People who made things with their bare hands in their spare time.
I worked on an upper floor of C-building, along with a mix of people. The teams were often mixed across the building, which was actually good even if it wasn’t intentional. Alex Seiden worked on shaders a few feet away, John Schlag was writing new tools in a side room, Joe Letteri had just started a few weeks before me, Annabella Serra worked across from me, Christian Hogue on the Death Squad worked behind me, all from different teams. Joe was even on a different show. I think later I ended up down in the large graphics room downstairs, with Geoff, Stefen, John Berton, Doug, Lincoln Hu, the great and sadly missed Rich Cohen, Sandy Ford-Karpman, and others there. That was again a mix of teams, which was good.
Wait, can we actually do this?
George Joblove: I think we had cautious optimism. It just felt like we should be able to do it. We knew that there were going to be some tough challenges to solve, but at the same time it felt like a really fun project that would be a great challenge and would be a great thing to accomplish.
Eric Enderton: Terminator 2 was this huge show because it had like 50 shots. I mean, today you can’t get out of bed for less than 300 shots.
Jay Riddle: When it’s Robert Patrick, the actor playing the T-1000, it looks like one thing, but when we’ve got this chrome and poly-alloy character moving around, it’s something weirdly different, right? And they had to kind of flow into each other, and re-form.
Jonathan French: For the majority of the show I was on a team comprised of Stephen Rosenbaum and John Nelson, with George Joblove helping keep us moving forward on our separate shots. The tools were evolving so rapidly it became a moving target for all of us to keep track of them, to be honest. The throughput of the software team was enormous, given their tiny size. All the developers were mega on the keyboard, but in all that time since I’ve never seen anyone type faster than Eric Enderton. I figured in future shows he’d be like guitarist Jonny Greenwood from Radiohead, wearing some custom wrist braces to keep his hands intact in front of a crowd of awestruck fans.
George Joblove: Chrome, in those days, was something that, you know, that computers did well. The idea of making it liquid, making it walk like a person, integrating it into a live action scene completely convincingly – those were all real challenges. But making a chrome character was going to be a lot easier than making a furry one would have been.
Doug Smythe (computer graphics shot supervisor): At that time, too, the staff at ILM for doing computer graphics was pretty small. It was like a dozen or so people, and we had to grow the department very quickly, so there was a lot of hiring that had to be done. We had divided up the shots and the teams.
“It was Terminator 2 where I thought, ‘Oh my god, we’re going to buy a million dollars worth of computers for this – what a staggeringly large number.'” – Eric Enderton
George Joblove: Hardware and software back then were so expensive. I think if you look at hard drive storage in 1990, a gigabyte of storage was $9,000. This was also still the age of SGI boxes, because they made computers that were specifically optimised for doing graphics work and with the most bang for the buck that you could get. We had a network of SGI machines that included some large servers and then a bunch of workstations.
Doug Smythe: The tools that we had at the time, well, some things were inherited from Pixar when they split off. But we kept copies of the tools, or at least some of the tools that were developed at Lucasfilm, and then we had some sort of deal back and forth with Pixar, including to use RenderMan, because we would keep in touch with the guys and they were still next door for a while. And we collaborated to the degree that our separate businesses and legal departments would allow.
Jonathan French: I think on my first overnight take for dailies I consumed several extra CPUs in the render room that we had downstairs. They were basically jammed with 240 VGX and 340 VGX SGI machines, along with other older SGI boxes. But as a result I think someone else’s shot didn’t finish that morning. I think around that time maybe it was Brian Knepp or someone else on the software team wrote PA (processor allocator) which was a nice simple GUI that allowed you to allocate or release CPUs from your allotment for your overnight renders. I’m not sure if that had been around before, but to my knowledge it wasn’t in commercial software at the time, like you can get now with RenderPal, Deadline, et al.
Eric Enderton: It was a really rare situation where you knew the film was going to be big. That hardly ever happens. We worked on stuff that we thought was going to be terrible and it turned out to be great, and then some things that went more the other direction, but this was one you just knew it was going to be big. I got to read the script and I just thought it was great. And it was Terminator 2 where I thought, ‘Oh my god, we’re going to buy a million dollars worth of computers for this – what a staggeringly large number.’ Those 50 shots took us something like six months. I mean, that was all we could do. When I got there the CG group was 12 or 15 people and we had our meetings in the upstairs kitchen in C building. Then by the time I left it was almost the whole company – ILM had grown to 300 people and the great majority of that was CG.
George Joblove: Everything was done step by step with a lot of tests along the way guided by Dennis Muren who had great faith in what we could do. He was also excited about the prospects of being able to do things that hadn’t been done before.
Jonathan French: The VFX roles weren’t really segregated like they are now. Sure we had specialists, but I basically got given a shot and I figured, oh, ok I’m supposed to model, animate, procedural animate, texture, light, render, and comp this shot using all these tools and this proprietary shell compositor I’ve never seen. It never occurred to me I was only supposed to do one or two of those things. It was a real DIY vibe.
Mark Dippé (associate visual effects supervisor): I give Cameron a lot of credit, the pseudopod [from The Abyss] and the liquid metal man in T2 are the same principle – they are what I would call the classic, perfect digital character. It has all the aesthetic elements that a digital system can deliver, and excel at.
Out from under The Abyss
John Schlag: ILM’s big splash before Terminator 2 was The Abyss. You know, the water creature, the pseudopod. They called it, internally, ‘the water weenie.’ And they had this single monolithic piece of software that created the creature. You make a spine curve and a series of edge profile curves. They would lock those. And then you can provide it with a Cyberware face, and it would stick that on the end. And then there were water ripples that it would add throughout the whole thing. It was like everything that you needed to do that one creature in one programme. And the programme did only that.
So one of the first things I did on T2 was get my hands on that, and started disintegrating it. Like, pulling bits of it out and turning them into separate tools. There are some places in T2 where the T-1000 gets shot, and you can see liquid metal under the police uniform, and it is sort of rippling and healing. I made a tool to do that, with [computer graphics animator] Jonathan French for the bullet hole healing, for example, which came out of pulling apart the different tools.
Mark Dippé: The pseudopod from The Abyss was an abstract alien creature that had no relationship to humanness or even livingness. But for the T-1000, the big question was, how can you make it move and behave as if it’s a human inside, whatever you wanna call it, even though Robert Patrick in this case is not a human, he’s a T-1000, he’s a machine, but that was the big concern.
“We even originally included a limp Robert Patrick had from a football injury. I noticed it in the initial test that we shot with him.” – Steve ‘Spaz’ Williams
Jay Riddle: I’d been working at ILM in the camera department before getting into digital effects. For our animation tools, there were a number of visits to Wavefront Technologies. Initially, Alias was kind of being ruled out, because it was considered a toy and not really a legitimate contender.
Part of that was because there were some personal relationships between the people that worked at ILM and Wavefront, so it felt like, ‘Oh we know them’, so if something goes wrong or we need something fixed or changed, they’ll respond to us, and as soon as we signed the Wavefront deal, that person who was at Wavefront left! So, it kind of took away the whole argument of why that was the great advantage. And in fact, from an artist standpoint, which I was doing in modelling and animating, Alias was much easier to use.
Wavefront was definitely the industry leader at the time, and had a lot of great features, and a huge community around it, and a lot of people that were good at it, and so ILM choosing to go the Alias route was kind of, well, people just kind of went, ‘What? You’re going with Alias?’ But it really legitimised Alias as a piece of software.
And really, what we did with Alias was, we hired Steve Williams from Alias itself for The Abyss, and he animated a spine moving around, and all of the little cross section circles along the path of the spine, and then Mark Dippé had written some software to kind of place those along the path, and make sure they were skinning properly and not twisting, and things like that, and Scott Anderson was also involved in that, as were a bunch of other people.
From real to digital
Steve ‘Spaz’ Williams: We had five separate categories of shots for Terminator 2. Now, we had what was called the pseudopod team, so we could re-purpose the data from The Abyss. But as opposed to refracting, the T-1000 was reflecting. Then we had the morph team, you know, which was the more two-dimensional transformations. Then we had the death team, that was the whole death sequence at the end. And then we had the [Human Motion Group] team.
We had Robert Patrick come up to ILM and we painted a grid on him, a four inch by four inch grid all over his body, and he was like in a crucifix pose. We had him run, and he ended up running so much on a rubber mat that we had that he ended up blistering his feet, to the point where we had to cover his feet up.
So, there was no real motion capture at that time, at all, so we shot him with two VistaVision cameras exposing simultaneously. One from the front on an 85mm lens, and one from the side on a 50mm lens, and they’re firing simultaneously. So I can look at frame one from the front, and that would match frame one from the side. From there I basically rotoscoped Robert’s walk.
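The geometric idea behind that two-camera setup is simple enough to sketch: with synchronised front and side views, the front view supplies a tracked point’s horizontal and vertical position while the side view supplies its depth. The toy function below assumes idealised orthographic views and hand-tracked 2D points; the real setup used perspective VistaVision cameras and the process was far more painstaking.

```python
# Toy illustration of combining front-view and side-view 2D tracks into a 3D point.
# Assumes idealised orthographic cameras; the y (height) reading appears in both
# views, so the two samples are simply averaged.

def triangulate(front_xy, side_zy):
    """front_xy = (x, y) from the front camera; side_zy = (z, y) from the side camera."""
    x, y_front = front_xy
    z, y_side = side_zy
    return (x, (y_front + y_side) / 2.0, z)

# Hypothetical hand-tracked knee joint on frame 1:
knee_frame_1 = triangulate(front_xy=(0.31, 0.54), side_zy=(-0.12, 0.55))
```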
Mark Dippé: It was really through hand digitization not only of his body data but of his movement data that we created a database with a virtual character. It was all hand-built.
Steve ‘Spaz’ Williams: We even originally included a limp Robert had from a football injury. I noticed it in the initial test that we shot with him. So I had to try and correct that in the bone walk. So when I went and I reanimated CC1 for real when we got the plate photography I made a lot of corrections to that, because he was supposed to walk like a machine.
Mark Dippé: It is one of those things where it’s a little subtle, but you can see it, and it just came out of the rotoscoping.
Steve ‘Spaz’ Williams: So, we had what we called RP1 through to RP5. Robert Patrick – RP – that was the actual naming convention.
Mark Dippé: RP1 is the blob, an amorphous blob. RP2 is a humanoid smooth shape kinda like Silver Surfer. RP3 is a soft, sandblasted guy in a police uniform made out of metal, and RP4 is the sharp detail of the metallic liquid metal police guy, and then RP5 is live action.
Steve ‘Spaz’ Williams: Now, to get to all those RP versions, we had to break it all down. In the script it said he migrates from the blob version into a fully clothed version. That’s Cameron’s idea – so we had to translate that. So we thought, okay, we’ll break it into four stages. Let’s just do that in data, but the control vertices have to actually share the exact same properties. But they migrate in time. That’s essentially what the MO was at that point.
“Spaz was so good at it that he could literally click ahead of the menus appearing.” – Michael Natkin
Mark Dippé: We chose those ones because we felt, first of all it was hard to do any of this, but we felt those five stages were sufficient for us to achieve all the story ideas that were required. You know, he’s a formless blob, oh, he’s kind of a soft humanoid form. Oh, he looks kinda like a policeman. He is the policeman, to Robert Patrick.
Steve ‘Spaz’ Williams: If you look at Robert Patrick and what we call the RP4, which is just before it becomes the real guy, all that data of his head we collected using a cyber scanner. Then what we had to do is write an equation to actually smooth it all down and make it stupid, make it essentially like ice cream for RP2. So the data all had to be the same. You were not changing the amount of control vertices in the actual data. You had to run a smoothing algorithm over it.
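As a rough illustration of that ‘smooth it all down without changing the data’ idea, here is a toy relaxation pass over a grid of control vertices: the CV count stays fixed while detail is averaged away. This is only a sketch of the principle, not ILM’s actual algorithm.

```python
# Toy smoothing of a control-vertex grid: each CV is pulled toward the average of its
# four grid neighbours, so the surface loses detail while the CV count stays the same.
# (Grid edges wrap around here purely for brevity.)
import numpy as np

def smooth_cv_grid(cvs, iterations=10, strength=0.5):
    """cvs: (rows, cols, 3) array of control vertices; returns a relaxed copy."""
    cvs = cvs.astype(float).copy()
    for _ in range(iterations):
        neighbour_avg = (np.roll(cvs, 1, axis=0) + np.roll(cvs, -1, axis=0) +
                         np.roll(cvs, 1, axis=1) + np.roll(cvs, -1, axis=1)) / 4.0
        cvs += strength * (neighbour_avg - cvs)
    return cvs
```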
Michael Natkin: Spaz was so good with Alias. Now, Alias was quite slow back in those days, and it had all these menus that you had to use. You’d click the bottom of the screen and a menu would pop up. Then, you’d look through it for the item you wanted, and then you’d click on that. Often, that would launch a submenu, and then you type in a couple numbers and press return, right? But it was super slow. It would do some operation. Spaz was so good at it that he could literally click ahead of the menus appearing. So he would click on the bottom of the screen, then click where the menu item was gonna be, then click where the submenu was gonna be, then type in the numbers, press return, then turn around, chat with you for a minute, and turn back around, and the screen would have done what he wanted.
Steve ‘Spaz’ Williams: In the script, the T-1000 is going to walk out of the fire and he’s going to, the term people used was ‘morph,’ but in fact it was model interpolation. He’s going to interpolate into the fully clothed version of Robert Patrick. So [the shot was called] CC1 where he migrates from RP2, which is what we call the ‘Oscar’ version, a smoothed-down T-1000, but he shares the exact same dataset or control vertices as RP4. And RP4, again, is the fully clothed version with the wrinkles and buttons. What I did is I hid all the buttons and the badge and the gun, I hid it inside his body cavity, and grew it out in time. The press called it morph. In fact, it was called model interpolation.
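Because RP2 and RP4 shared an identical control-vertex layout, the ‘model interpolation’ itself reduces to blending each CV from one pose to the other over the length of the shot. A minimal sketch, with hypothetical array names:

```python
# Minimal model interpolation: both models share the same CV topology, so each control
# vertex is linearly blended from its RP2 position to its RP4 position as t goes 0 -> 1.
import numpy as np

def interpolate_model(cvs_rp2, cvs_rp4, t):
    return (1.0 - t) * cvs_rp2 + t * cvs_rp4

# e.g. frame 12 of a hypothetical 48-frame transition:
# blended_cvs = interpolate_model(rp2_cvs, rp4_cvs, t=12 / 48.0)
```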
Geoff Campbell: Steve [Williams] had brought me on to work primarily with him on the T-1000 and I believe my first task was to take his detailed Robert Patrick model and make a smooth ‘Oscar’-like version for the liquid metal transitions. Today in just about any software that task would be a twenty minute job with a smoothing brush, but in those days the software was very limited and even a sophisticated package like Alias was ridiculously crude by today’s standards. We were also using NURBs with overlapping control vertices so modeling was a very complicated process. Also there wasn’t a shaded GL mode when sculpting, and on top of that you could only move one control point at a time.
They had something revolutionary at the time called Prop Mod which allowed you to select a CV and type in a number of CVs in the surrounding u and v direction that you wanted to move with a falloff, but to use it you had to click down on the CV and wait for 5 seconds before you could drag your point to its new location. It was so slow I never bothered to use it. So for me sculpting was the tedious task of moving one point at a time. I used to joke that it was as intuitive as sculpting with chicken wire. The hardest part was sculpting those points in wireframe and not seeing the shaded form. You could only see the results of your sculpting if you clicked on the ‘quick shade’ option, where your screen would go black for 5 minutes and then start building your image on the screen one line at a time. That was reserved for when you were close to finishing your model and you needed to see what the hell you had done all day. It also forced you to take a coffee break.
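For readers unfamiliar with proportional modelling, the sketch below shows the general idea behind a Prop Mod style edit: one selected CV is moved, and its neighbours within a chosen u/v radius follow it, scaled by a falloff. The falloff shape and other details here are illustrative assumptions, not Alias’s actual behaviour.

```python
# Proportional-modelling sketch: move one CV and drag nearby CVs with a cosine falloff.
import numpy as np

def prop_mod_move(cvs, u, v, delta, radius_u=3, radius_v=3):
    """cvs: (nu, nv, 3) CV grid. Moves cvs[u, v] by delta; neighbours within the u/v
    radius move by a falloff-weighted fraction of delta."""
    cvs = cvs.astype(float).copy()
    delta = np.asarray(delta, dtype=float)
    for du in range(-radius_u, radius_u + 1):
        for dv in range(-radius_v, radius_v + 1):
            uu, vv = u + du, v + dv
            if 0 <= uu < cvs.shape[0] and 0 <= vv < cvs.shape[1]:
                d = min(max(abs(du) / radius_u, abs(dv) / radius_v), 1.0)
                falloff = 0.5 * (1.0 + np.cos(np.pi * d))  # 1 at the centre, 0 at the edge
                cvs[uu, vv] += falloff * delta
    return cvs
```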
My first animation on T2 was of John Connor’s foster mother’s body transitioning back into the T-1000 and stepping over John’s dead foster father. We didn’t have inverse kinematics or constraints, so you had to keep track of all your body rotations, and when you overshot a particular joint’s rotation it could affect the whole arm or leg, so animating was much more time consuming than it is today. Match moves were also not as accurate, so you often had to cheat the feet sliding to a ground plane in order to make them appear to be locked to the floor, one frame at a time.
Doug Smythe: In the hallways of ILM, we still have the little maquettes that were made of the five stages of the T-1000, and it starts from this very amorphous blob – which was actually just a keyframed spline surface posed to do whatever it needed to do – to different stages of levels of detail of Robert Patrick as silver, and then finally the live action actor.
But we didn’t have any way to go from the first to the second, or from the fourth to the fifth. So any time it went from blobby to the low-resolution humanoid version, that involved a morph. We got it as close as we could just in animation and then you let the morph take over. I think we had some sort of mesh dissolve thing so that we could take the higher resolution mesh, smooth it, and project it onto the lower resolution mesh so we could actually transform – do a cut – from one to the other. We may have used some morphs to help that, but I think we could do a geometric transformation as you get sharper and sharper silver detail.
Alex Seiden (computer graphics animator): One of the things I coded was an interactive lighting editor (called ‘led’) that would help artists position reflections. I rendered a ‘geometry buffer’ – pre-computed surface normals and positions – so that shading parameters and reflection planes could be re-positioned and quickly re-computed without having to do a full render. There were also some features that would allow you to place a reflection or specular highlight by clicking where on the image you wanted it to appear.
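The core of that geometry-buffer trick is easy to illustrate: once per-pixel normals and world positions have been stored, a cheap shading model can be re-evaluated every time the artist drags a light, with no re-render of the geometry. The Lambert-plus-highlight model below is purely a stand-in for whatever shading ‘led’ actually recomputed; all names are invented.

```python
# Re-shade an image from stored geometry buffers: normals and positions are (H, W, 3)
# arrays, light_pos and eye_pos are 3-vectors. Cheap enough to recompute interactively
# as the light is moved, with no geometry re-render.
import numpy as np

def reshade(normals, positions, light_pos, eye_pos, spec_power=40.0):
    L = light_pos - positions
    L /= np.linalg.norm(L, axis=-1, keepdims=True)
    V = eye_pos - positions
    V /= np.linalg.norm(V, axis=-1, keepdims=True)
    H = L + V
    H /= np.linalg.norm(H, axis=-1, keepdims=True)
    diffuse = np.clip(np.sum(normals * L, axis=-1), 0.0, 1.0)
    highlight = np.clip(np.sum(normals * H, axis=-1), 0.0, 1.0) ** spec_power
    return diffuse + highlight  # (H, W) intensity image
```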
Sock stories
Steve ‘Spaz’ Williams: We were using Alias version 2.4.1. I had come up with a method to build the T-1000 using separate four-sided b-spline patches. Then we hired a guy out of Toronto – Angus Poon, who was an excellent code writer. If you have four-sided b-spline patches and the character is breaking, well, he basically came up with ‘Sock’ [which would be revised and called ‘Body Sock’], a piece of code that stitched things together where it was all breaking.
Michael Natkin: Later, this kind of thing would be done with NURBs, but before that they were just b-spline patches. The process was that they would make a still model that was perfect, all the surfaces were blended. Then, they would make the skeletons, and they would animate the skeletons. Of course, when you animate the skeletons, the splines would separate, right? If you imagine that your body is made up of plates of rigid armour, and then you reposition the arms and legs, or whatever, the armour plates are gonna separate, or overlap.
What Body Sock was doing was giving us a way to blend those patches back together. There’s certain parts of the body, particularly one of the biggest ones is the crotch area – all of these surfaces had four edges. They were rectangular, but the geometry of where the legs come together into the torso, there’s just not really a great way to do that with four-sided patches. Body Sock would basically let you specify different kinds of blends.
“A TD had to know much more of what went on ‘inside’ the software/computer then, in order to achieve the desired results efficiently.” – Stefen Fangmeier
Eric Enderton: I worked on Body Sock, and Carl Frederick, Mike Natkin and Lincoln Hu were also a big part of it. The way to think about it is to imagine somebody’s knee. As you bend the knee there’s going to be a separation. If you just have a rigid upper leg and a rigid lower leg, and you bend the knee, there’s going to be this break. Either that or interpenetration, or something funny is going to go on. The question was, how can we do that skeletal animation but then end up with a smooth surface? So, nowadays this is built into so much software that nobody even thinks about it, but at the time it was like, ‘Oh boy, how do we do this?’
I don’t remember how we arrived at this at all, but the name came from imagining, could we put a body sock, like a stretchy nylon fabric around all these individual animated pieces of the body and have it be a smooth surface then that would follow the whole body? That was the original idea.
It ended up that that’s not what we did; instead, what we did was stitching. All of this stuff was being modelled in uniform cubic b-spline surfaces – so, NURBs, only simpler.
There was a button, a menu item in Alias that would do this for two surfaces statically. It ignored the animation, it was just a modelling operation that would stitch two surfaces together. One of the things that they had asked me to do earlier was to make an animated version of that tool. I wrote a little programme that read in a scene, you gave it the names of two surfaces and it did this stitching operation on each frame and then wrote the animation back out.
I tried it on a plane next to another plane or something, and it seemed to work, so I gave it to Spaz and he picked it up and in 20 seconds he made an arm animation with a muscle bulge, and then hooked it up and typed in the command and tried it out and there was this arm flexing back and forth. That was my first real experience of an artist picking up a tool I had made and making this beautiful art with it that I could never have made myself. I had the sense that this artist was held down by chains that were the limitations of their tools and I had just cut one of the chains. What a great feeling. I was hooked.
What we did with Body Sock was make an automatic stitching tool that would go and stitch all the seams in the entire character each frame. To do this, you needed a Sock file that told you where each of those seams was. It’d name the two surfaces and which side of each, plus U, plus V, minus V, minus U. Somebody had to very carefully figure this out. For the simple seams the math is really simple. Then you can do something a little more complicated where you have more subdivisions on one side than on the other. As long as it’s an integer multiple it’s okay.
Then the corners, if you have four surfaces that come together at a corner you can sort of imagine this same math is not too bad. You just line up all the control points and average them. But if you have three surfaces or five surfaces or some other number coming together at a corner, which you do in a humanoid form, you have to have at least a couple points like that, the math is a lot less obvious. It took us a while of poking around to figure out how to do that.
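Reduced to its simplest case, the per-frame stitch is just a matter of knowing which edge of which patch meets which edge of another, then averaging the matching boundary control vertices so the seam closes up. The sketch below assumes both patches carry the same number of CVs along the shared edge; the harder corner cases Enderton describes, and seams with mismatched subdivision counts, are left out. The edge naming and data layout are assumptions for illustration, not the actual Sock file format.

```python
# Simplest-case seam stitching: average matching boundary CVs of two b-spline patches
# so the surfaces line up along the seam. Called once per seam, per frame.
import numpy as np

def edge_cvs(cvs, edge):
    """Return a view of one boundary row/column of a (nu, nv, 3) CV grid."""
    return {"+u": cvs[-1, :], "-u": cvs[0, :],
            "+v": cvs[:, -1], "-v": cvs[:, 0]}[edge]

def stitch_seam(cvs_a, edge_a, cvs_b, edge_b):
    """Average the two named boundaries in place (both must have the same CV count)."""
    ea, eb = edge_cvs(cvs_a, edge_a), edge_cvs(cvs_b, edge_b)
    avg = 0.5 * (ea + eb)
    ea[:] = avg
    eb[:] = avg

# A driver would read the seam records ("upper_arm +u  lower_arm -u", ...) from the
# sock file and run stitch_seam over every seam on every frame of the animation.
```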
Mimetic poly alloy. Wuh?
Alex Seiden: The first thing I did on T2 was to write the ‘poly alloy’ shader for the T-1000. The mercury-like surface of the T-1000 required very specific reflections, but in those days we didn’t have ray-tracing available in a production renderer. So I came up with a way to let us do enhanced, controllable reflection mapping. TDs could place multiple reflection planes in the scene with the animation, and inside the shader I’d do a quick hit test to see if the plane was hit. It was a RenderMan shader.
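The geometric test Seiden describes can be sketched outside a shader: reflect the view direction about the surface normal, intersect the reflected ray with an artist-placed plane, and on a hit sample that plane’s reflection image. The real version was written as a RenderMan shader; the Python below is only a schematic stand-in, with all names invented.

```python
# Schematic reflection-plane test: reflect the view ray about the normal and intersect
# it with a user-placed plane. A hit would be mapped to (u, v) on the plane and used to
# look up a reflection texture; a miss falls back to an environment colour.
import numpy as np

def reflect(view_dir, normal):
    """Mirror a normalised view direction about a normalised surface normal."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def hit_reflection_plane(point, refl_dir, plane_point, plane_normal):
    """Return the hit point of the reflected ray on the plane, or None for a miss."""
    denom = np.dot(refl_dir, plane_normal)
    if abs(denom) < 1e-6:
        return None
    t = np.dot(plane_point - point, plane_normal) / denom
    return point + t * refl_dir if t > 0.0 else None
```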