Space tourism

Not sure if this belongs here. Will the space industry save the coal industry?

3 Likes

Would anyone making an investment in space travel or potential space tourists be concerned about the coal industry? Probably not.

An assumption here is that the coal industry might need saving. Hardly relevant, but perhaps this is just a comic distraction. :rofl:

Note:
Space launches are far from environmentally friendly. Solid booster rockets add carbon soot and alumina particles to the upper atmosphere. Hundreds of tonnes of CO2 are created by each SpaceX launch.

Even the old-fashioned liquid-oxygen-powered rockets were CO2 monsters. It makes great PR that they burn clean and produce mostly water vapour. However, every gram of oxygen was extracted from air as LOX using a coal-fired electricity grid, and every gram of hydrogen came from processing a hydrocarbon fuel, yielding H2 + CO2.

But perhaps all is forgiven, as without space rockets there would be no GPS, no global mapping surveys, no global weather and environmental monitoring, no means to reliably assess many measurements science considers important to modern thinking. Where have all the polar bears gone? Perhaps burning more coal to do more in space is the way to better science. :wink:

In the interim “the Australian Space Agency” may be a more appropriate euphemism for an emptiness of policy than facilitating more practical endeavours.

1 Like

Anyone for a working holiday?

An out-of-this-world article.

1 Like

As the article says, it’s computer generated images.

If you are referring to @Fred123’s link, it is a collage of pictures from 3 rovers made into a video. In that context it is computer generated, but it is Mars, not an animation.

The group stitched thousands of pictures together, converting them into an ultra-high-definition view of the Red Planet.

4 Likes

Still computer generated images @PhilT. The original has been substantially modified and enhanced using image processing.

The word ‘generated’ can be misleading in context. ‘Enhanced’ is probably more appropriate.

1 Like

No, I don’t accept ‘enhanced’ at all, though you are probably right about ‘generated’. But that is a nit-picking point @PhilT. Maybe a better term for the process employed to produce these images is ‘transformed’. The end result is not what any of those explorers actually sent, but a composition processed from raw pixels by computer into what we see in this article.

By that definition nearly every digital image produced by a professional or advanced amateur photographer is ‘computer generated’. By the more common usage CGI does not include all images that have been digitally enhanced.

As there is a continuum from 100% computer-created to unmodified images formed only by a lens and a sensor, we could argue this forever. I think, though, that your original comment without any further explanation would be likely to mislead people into thinking the video was generated by computer de novo, which is not the case.

4 Likes

My few years at NASA (Langley and Johnson) might give me a different perspective. The downlink data was indeed a bit stream requiring reconstitution but to many the term generated suggests animation or CGI.

I’ll stand with my opinion. You are welcome to yours.

3 Likes

It is also valid for me to say, @syncretic, that what is in that video is a substantially transformed, enhanced, and computer processed version of what the actual video data was. Seems all pervasive these days.

The tech world at large defines the two separately: what is computer generated and what is computer enhanced.

CGI

CEI

When someone uses CGI to describe an image, they are not talking about CEI. Both use a computer, as do digital cameras, but I think most would say that an interpolated image from a camera is a different beast to CGI. So what the scientists have done with the Mars images is to combine pixels from multiple photos of the same scene, filling in detail that each photo originally lacked because its camera didn’t have enough receptors, sometimes using multiple photos of the same area from the same camera.

Pervasive? Yes, we do it ourselves all the time in our brains. Even in the days of actual film photos there was grain present, and photographers worked hard to reduce its impact on the viewer of the image. The human brain uses interpolation and enhancement too. For example, we think we see a man running in perfectly smooth, non-stop motion, but the reality is that we have stitched together a series of images (around 13 Hz, i.e. one frame every 1/13 of a second) and our brain (our computer) works to fool us into seeing smooth, uninterrupted movement.

From a gaming refresh-rate discussion, in support of the brain adding images together to form one:

“What is the optimal temporal frequency of an object that you can detect?”

And studies have found that the answer is between 7 and 13 Hz. After that, our sensitivity to movement drops significantly. [Adrien Chopin, a post-doc researcher in cognitive sciences says] “When you want to do visual search, or multiple visual tracking or just interpret motion direction, your brain will take only 13 images out of a second of continuous flow, so you will average the other images that are in between into one image (my highlighting)”.

This averaging in many ways is really no different to what has been done with the Mars images. The main difference is that the capture source or sources were cameras rather than human eyes.
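The averaging idea described above can be sketched in a few lines. This is a hypothetical illustration, not the actual processing pipeline used for the Mars video: several noisy captures of the same scene are averaged pixel-by-pixel, so random per-frame noise cancels out and the result is closer to the true brightness. All names and values here are invented.

```python
# Hypothetical sketch: averaging several aligned captures of the same scene
# into one cleaner frame, the basic idea behind temporal averaging/stacking.
# Frames are plain lists of pixel brightness values (0-255).

def average_frames(frames):
    """Average aligned frames pixel-by-pixel to suppress per-frame noise."""
    n = len(frames)
    width = len(frames[0])
    return [sum(frame[i] for frame in frames) / n for i in range(width)]

# Three noisy readings of the same 4-pixel scene (true values: 100, 150, 200, 250)
captures = [
    [98, 152, 199, 251],
    [103, 148, 202, 247],
    [99, 150, 199, 252],
]
print(average_frames(captures))  # each average lands on or near the true value
```

Real multi-frame processing also has to align the frames first (the rovers’ cameras move between shots), but the noise-cancelling principle is the same.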

3 Likes

The human brain is constantly doing this. Beginning photographers battle against the discrepancy between the image in their mind’s eye and the image they see on the monitor or on the print. This confusion is exacerbated when they see diagrams of the human eye and realise that the structure is very analogous to a camera. The optics behind what is going on is very similar; both the eye and the camera have limitations of depth of field, field of view and dynamic range. Why then is the image in the camera such a limited subset of the image in the brain?

It is the brain/eye combination that makes the difference.

The eye makes up for the limits of depth of field by constantly re-focusing between near, far and in between. We compensate for lack of field of view by turning our eyes or head, we bring the part of the eye most sensitive to detail to bear on items of most interest. We allow for the limited sensitivity of the retina by altering the aperture, opening and closing the iris. But most of the time we do not notice this as the eye and the brain do all these adjustments and the consequent re-processing without conscious thought.

There is re-processing of the images going on constantly. We don’t think of the long distance image of the mountains 30 km away and the foreground image of the rose bush as being different as the mind stitches them together seamlessly from the various images of different focal depth our eye generated. To do this with a camera in one exposure you have to have a very narrow aperture to give greater depth of field. This has problems as you get very little light to form an image and there are forms of image degradation that come into play. With the wonders of digital processing you can mimic the mind by stitching together (focus stacking) several images and do away with the narrow iris.
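The focus-stacking trick mentioned above can be sketched very simply. This is a toy, hypothetical version (1-D pixel rows, no alignment, invented names), but it shows the core idea: for each pixel, keep the value from whichever exposure is locally sharpest, so the near-focused and far-focused detail both survive in the composite.

```python
# Hypothetical 1-D focus-stacking sketch: for each pixel, keep the value from
# whichever exposure has the highest local contrast at that position.
# Real focus stacking works on aligned 2-D images; this is only the principle.

def local_contrast(img, i):
    """Rough sharpness measure: absolute difference from neighbouring pixels."""
    left = img[max(i - 1, 0)]
    right = img[min(i + 1, len(img) - 1)]
    return abs(img[i] - left) + abs(img[i] - right)

def focus_stack(images):
    """Pick, per pixel, the value from the image that is sharpest there."""
    width = len(images[0])
    return [max(images, key=lambda img: local_contrast(img, i))[i]
            for i in range(width)]

near_focused = [10, 90, 10, 50, 50, 50]   # sharp detail on the left, blur right
far_focused  = [40, 40, 40, 10, 90, 10]   # blur on the left, sharp detail right
print(focus_stack([near_focused, far_focused]))
```

The composite keeps the high-contrast detail from each source, much as the eye/brain combination assembles a scene from many refocusings.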

Similarly, our brains perform panorama stitching as we turn our head to admire a scene that is much broader than the eye can gather at once. How odd that the digital panorama produces weird perspectives as 180 degrees or more are flattened out but the brain does not? The same with depth perspective. An experienced viewer can tell you if a camera image was taken with a long or short lens. When our eye looks into the distance or close up we perceive no such difference. The eye/brain does similar tricks with dynamic range that we only start to notice when the eye gets old and can no longer change aperture quickly.

Back to the ‘movie’ of Mars. It was produced by stitching together many single images. The result is much more impressive and relatable than flipping through thousands of stills. It is as though we were there, because it allows our eyes to roam over the scene; the computer stitching is helping us by taking the data in all those stills and presenting it as if our eye were seeing it. The process is making up for the fact that our brain is only capable of doing this trick when the data comes from the cameras it controls: the eyes.

The movie gives us a sensory experience more like if we were there. From that point of view it is more real than the collection of unprocessed images which are individually limited in depth and breadth of field and range.

2 Likes

Completely, unless one is here for the full half hour experience :wink:

3 Likes

Rolls Royce and the UK Space Agency to research nuclear-powered space exploration.

https://www.spacedaily.com/reports/Rolls_Royce_and_UK_Space_Agency_launch_study_into_nuclear_powered_space_exploration_999.html

Off-planet is probably the only sensible place for most nuclear fission. We obviously need something better than the offspring of a 9th century development (gunpowder) if we’re to make use of interplanetary space. Still, the thought of launching reactors, or even just nuclear fuel, from Earth makes me uncomfortable. What goes up, doesn’t always stay up as planned. A single accident could be disastrous. If they find uranium on the Moon, then maybe.

Nuclear power sources have been used in space for 60 years, not as propulsion systems so far but as electricity and heat sources. They are on the Moon and Mars. Many are in Earth orbit in satellites. Some have crashed back to Earth.

Indeed. The Voyager probes, for instance. But there’s a problem:

Indeed, and that poses risks. Whether those risks are acceptable is a matter of opinion. Comfortingly, NASA once shrugged off the risks because (among other things) the plutonium would probably land in the southern hemisphere.

1 Like

It won’t be Pu-238 used for propulsion in spacecraft. It will be the same Pu-239 and U-235 as used in power generation in fission reactors today. The first is readily bred from the very plentiful U-238 in breeder reactors; the second comes from enriching the naturally occurring mix of U-238 and U-235 to a U-235 concentration of around 10 times the natural level.
Fuel supply will not be a problem.
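A quick back-of-envelope check of that “around 10 times the natural mix” figure: natural uranium is roughly 0.72% U-235, so tenfold enrichment gives about 7%, which sits in the low-enriched-uranium range used by many power reactors. A one-liner makes the arithmetic explicit:

```python
# Back-of-envelope check of the "around 10 times the natural mix" figure.
# Natural uranium contains roughly 0.72% U-235 by atom count.
natural_u235 = 0.0072
enriched = natural_u235 * 10
print(f"{enriched:.1%}")  # about 7.2%, in the low-enriched-uranium range
```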