My answer to your question is YES and NO: when the images are first taken, they are regular images; once they are processed, or orthorectified, they become orthophotos. So, in my experience, "orthophoto" is not defined by the platform type but by the process of correcting an image to remove the distortions introduced by the camera's perspective. Both aerial cameras and satellite sensors use some form of lens, so both can produce orthophotos.
For example, see the following photo:
On the left, we see a traditional image being shot: the point where all the light rays cross is the camera's focal point, and the flat surface at the top is the film or CCD of the imaging device. Here, the only true representation of the terrain comes from the light ray traveling straight down ("nadir"); every other return gives us a progressively more distorted image the farther we move from center. This is why we see tall buildings "leaning" away from the center of the photo; if a building sat at the true center, we would see only its roof.
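That "leaning" effect can be quantified with the standard relief-displacement approximation for a vertical photo. This is a toy sketch with made-up numbers (the flying height, building height, and radial distances are all hypothetical), not a reproduction of any real survey:

```python
# Relief displacement in an unrectified vertical photo.
# Standard approximation: d = r * h / H, where
#   r = radial distance of the object's top from nadir on the photo,
#   h = object height above ground,
#   H = flying height above ground.

def relief_displacement(r_mm: float, h_m: float, flying_height_m: float) -> float:
    """Radial 'lean' of a vertical object on the photo, in the same units as r."""
    return r_mm * h_m / flying_height_m

H = 1500.0  # flying height in metres (hypothetical)
h = 60.0    # building height in metres (hypothetical)

for r in (0.0, 20.0, 60.0, 100.0):  # radial distances on the photo, in mm
    d = relief_displacement(r, h, H)
    print(f"r = {r:5.1f} mm -> displacement = {d:.2f} mm")

# At r = 0 (the nadir point) the displacement is zero: we see only the roof.
# The farther the building is from nadir, the more it appears to "lean".
```

The printout shows the lean growing linearly with distance from nadir, which is exactly the warp that orthorectification removes.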
By orthorectifying the imagery, we get what we see on the right: a mathematical transformation/interpolation of the image, using known ground control points, that (almost) eliminates the warp that grows as we move away from center. It is almost as if every point in the image had become nadir. Because of this, the scale remains constant throughout the entire image (e.g., 1 cm = 50 m), whereas in an unprocessed image the scale varies from point to point with terrain relief and camera tilt.
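To see why scale is not constant before rectification, here is a minimal sketch using the usual vertical-photo scale relation, scale = f / (H - h), where f is focal length, H is flying height above the datum, and h is the terrain elevation at a point. All numbers are hypothetical illustration values:

```python
# Photo scale in a vertical (unrectified) photo varies with ground elevation;
# an orthophoto is resampled so that one constant scale holds everywhere.

f = 0.15    # focal length in metres (hypothetical)
H = 2000.0  # flying height above the datum in metres (hypothetical)

def photo_scale(h_terrain_m: float) -> float:
    """Scale factor (photo distance / ground distance) at a point of given elevation."""
    return f / (H - h_terrain_m)

for h in (0.0, 200.0, 400.0):  # terrain elevations in metres
    s = photo_scale(h)
    print(f"elevation {h:5.0f} m -> scale 1:{1 / s:,.0f}")

# Higher ground is closer to the camera, so it appears at a larger scale.
# An orthophoto removes this variation, so a single scale (e.g. 1 cm = 50 m)
# is valid for measurements anywhere in the image.
```

This is why you can reliably measure distances off an orthophoto but not off a raw aerial frame.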
Hope this helps!
Todd