Mac and iOS developers have a number of different programming interfaces for getting objects (for example, 3D configurators) onto the screen. UIKit and AppKit have their own image, color, and path classes. Core Animation lets you move layers of objects around. OpenGL lets you render objects in 3D space. AVFoundation lets you play video.
Core Graphics, also known as Quartz, is one of the oldest graphics-related APIs on these platforms. Quartz forms the basis of most 2D drawing. Want to draw shapes, fill them with gradients, and give them shadows? Core Graphics can do that. It also lets you composite images on the screen and create PDFs.
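As a taste of what that looks like in practice, here is a minimal Swift sketch (the function name `drawBadge` is ours, not part of any API) that fills a rounded-rectangle path with a linear gradient and gives it a shadow, using a transparency layer so the shadow applies to the finished gradient shape as a whole:

```swift
import UIKit

// A minimal sketch, assuming iOS/UIKit; `drawBadge` is our own name.
// It clips to a rounded-rect path, fills it with a linear gradient,
// and wraps the drawing in a transparency layer so the shadow is
// applied to the whole shape rather than to each primitive.
func drawBadge(in context: CGContext, rect: CGRect) {
    context.saveGState()

    // Any drawing done inside the transparency layer gets this shadow.
    context.setShadow(offset: CGSize(width: 0, height: 3),
                      blur: 6,
                      color: UIColor.black.withAlphaComponent(0.4).cgColor)
    context.beginTransparencyLayer(auxiliaryInfo: nil)

    // Clip to a rounded-rect path so the gradient fills only the shape.
    let path = CGPath(roundedRect: rect.insetBy(dx: 8, dy: 8),
                      cornerWidth: 12, cornerHeight: 12, transform: nil)
    context.addPath(path)
    context.clip()

    // A two-stop linear gradient, drawn top to bottom.
    let colors = [UIColor.systemTeal.cgColor, UIColor.systemBlue.cgColor] as CFArray
    let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                              colors: colors, locations: [0, 1])!
    context.drawLinearGradient(gradient,
                               start: CGPoint(x: rect.midX, y: rect.minY),
                               end: CGPoint(x: rect.midX, y: rect.maxY),
                               options: [])

    context.endTransparencyLayer()
    context.restoreGState()
}
```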
Core Graphics is a fairly large API, covering everything from basic geometric data structures (such as points, sizes, vectors, and rectangles) and the calls that manipulate them, through objects that render pixels into images or onto the screen, all the way to event handling. You can use Core Graphics to create “event taps” that let you listen in on and manipulate the stream of events (mouse clicks, screen taps, random keyboard mashing) coming into the application.
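To make that last, perhaps surprising, capability concrete, here is a hedged sketch of an event tap in Swift. This is macOS-only, and a real process needs the appropriate accessibility or input-monitoring permission, or the tap will silently see nothing:

```swift
import CoreGraphics

// A sketch, not production code: a listen-only event tap that logs
// key-down events for the current login session without altering them.
let mask = CGEventMask(1 << CGEventType.keyDown.rawValue)
guard let tap = CGEvent.tapCreate(
    tap: .cgSessionEventTap,      // tap events for the login session
    place: .headInsertEventTap,   // before other taps see them
    options: .listenOnly,         // observe only, do not modify
    eventsOfInterest: mask,
    callback: { _, type, event, _ in
        print("saw event of type \(type.rawValue)")
        return Unmanaged.passUnretained(event)  // pass the event through
    },
    userInfo: nil
) else {
    fatalError("could not create event tap (missing permission?)")
}

// Wire the tap into the run loop so the callback actually fires.
let source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0)
CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .commonModes)
CGEvent.tapEnable(tap: tap, enable: true)
CFRunLoopRun()
```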
Why does a graphics API handle user events? The answer is historical, and a little development history also explains why other parts of Core Graphics behave the way they do.
A PostScript about history.
Back in the 1980s, graphics APIs were pretty primitive compared to what we have today. You could choose from a limited palette of colors, draw individual pixels, draw lines, and draw some basic shapes like rectangles and ellipses. You could set up clipping regions that told the world, “Hey, don’t draw here,” and sometimes you got exotic features like control over how wide lines could be. Often there were “bit blitting” functions for copying blocks of pixels around. QuickDraw on the Mac had a cool feature called regions that let you create arbitrarily shaped areas and use them to paint, clip, outline, or hit-test. But in general, the APIs of that era were very pixel-oriented.
In 1985, Apple introduced the LaserWriter, a printer containing a microprocessor more powerful than that of the computer it was connected to, with 12 times the memory, at twice the price. This printer produced incredibly beautiful output thanks to a technology called PostScript.
PostScript is a stack-based programming language from Adobe, similar to FORTH. As a technology, PostScript was designed around vector graphics rather than pixels. An interpreter for the PostScript language was embedded in the LaserWriter, so when a Mac application wanted to print, it generated PostScript code that was sent to the printer and executed there.
Representing the page as a program was a profound design decision. It let an application describe the contents of a page algorithmically, so that whatever device executed the program could draw the page at the highest resolution it supported. For most printers at the time that was 300 dpi; for others, 1200 dpi. All from the same generated program.
In addition to rendering pages, PostScript is Turing-complete and can be treated as a general-purpose programming language. You could even write a web server in it.
Along came the Cube.
When the NeXT engineers designed their system, they chose PostScript as its rendering model. Display PostScript, a.k.a. DPS, extended the PostScript model to work on a windowed computer screen, but deep down there was still a PostScript interpreter. NeXT applications could implement their screen drawing in PostScript code and use the same code for printing. You could also wrap PostScript in C functions to call from application code.
Display PostScript was also the basis of user interaction: events flowed through the DPS system and were then dispatched to applications.
NeXT was not the only windowing system using PostScript at the time. Sun’s NeWS also had a PostScript interpreter at its core, extended to drive the user’s interaction with the system.
Enter Quartz.
Why don’t OS X and iOS use Display PostScript? The answer, as so often, is money. Adobe charged a license fee for Display PostScript, and Apple is famously keen to own as much of its technology stack as possible. By implementing the PostScript drawing model without actually using PostScript, Apple avoided the license fees and owned its core graphics code outright.
It is commonly said that Quartz is “based on” PDF. PDF is essentially the PostScript drawing model without the programmability. Quartz was designed so that typical use of the API maps very closely to PDF, which makes generating PDFs on the platform almost trivial.
Basic architecture.
All of your Core Graphics drawing calls are executed in a “context”: a collection of data structures and function pointers that controls how rendering takes place.
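For example, in UIKit a context is already set up for you by the time `draw(_:)` is called; a minimal sketch:

```swift
import UIKit

// A minimal sketch: by the time UIKit calls draw(_:), a window-backed
// Core Graphics context has already been set up. You fetch it and draw.
class CircleView: UIView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setFillColor(UIColor.systemRed.cgColor)
        context.fillEllipse(in: bounds.insetBy(dx: 4, dy: 4))
    }
}
```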
There are several kinds of context, such as NSWindowGraphicsContext. This particular context takes the drawing commands issued by your code and puts pixels into a chunk of memory in your application’s address space. That memory is shared with the window server, which takes the window surfaces of all running applications and composites them together on the screen.
Another kind of Core Graphics context is an image context. Any drawing code you execute puts pixels into a bitmap image, which you can then use to draw into other contexts or save to the file system as a PNG or JPEG. There is also a PDF context: here the drawing code you execute is not converted into pixels but into PDF commands, which are stored in a file. Later, a PDF viewer can turn those PDF commands into something visible.
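As a sketch of how the same drawing code can target both kinds of context (the helper name `renderScene` is ours, not an API), using UIKit’s renderer wrappers over Core Graphics:

```swift
import UIKit

// One drawing routine, two context types. `renderScene` is our own name.
func renderScene(into context: CGContext, size: CGSize) {
    context.setFillColor(UIColor.systemBlue.cgColor)
    context.fillEllipse(in: CGRect(origin: .zero, size: size)
                                .insetBy(dx: 10, dy: 10))
}

let size = CGSize(width: 200, height: 200)

// Image context: the drawing becomes pixels, which we can encode as PNG.
let image = UIGraphicsImageRenderer(size: size).image { ctx in
    renderScene(into: ctx.cgContext, size: size)
}
let pngData = image.pngData()   // ready to write to the file system

// PDF context: the same drawing is recorded as PDF commands, not pixels.
let pdfData = UIGraphicsPDFRenderer(bounds: CGRect(origin: .zero, size: size))
    .pdfData { ctx in
        ctx.beginPage()
        renderScene(into: ctx.cgContext, size: size)
    }
```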
What’s next?
In a later article, we will take a closer look at contexts and at some of the convenience APIs layered on top of Core Graphics.