DVD -> DivX
 
Trixter
Guest

Posted: Fri Jan 10, 2003 8:29 am

Gldm wrote:

I'd like to argue the concept of an interlaced field update being a "discrete image". It's either the odd or even lines of a discrete image, not a full discrete image.


There's no arguing; you're just plain wrong. :) Each field is a discrete image. Capture video of a live event, like a sporting event or newscast, and then separate your 720x480@30 into 720x240@60. The easiest way to do this is with AviSynth, using a script similar to the following:

Code:

AVISource("myfile.avi")   # load the interlaced 720x480 capture
SeparateFields()          # split each frame into its two 720x240 fields


Then load it into VirtualDub. What you will see as you single-step through it is that each field truly is a separate image. The even fields are shifted up half a pixel, so it will "bounce" up and down as you single-step, but you really do get 60 different images per second in a 720x480 30i file.
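
If the half-line bounce makes single-stepping annoying, one option (just a sketch, on the same myfile.avi) is to let Bob() stretch each field back to full height with the offset compensated, so you can watch all ~60 images per second without the picture jumping:

Code:

AVISource("myfile.avi")   # the same interlaced capture
Bob()                     # each field becomes a full-height 720x480 frame, half-line offset corrected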

Once you perform this test and see that you are wrong, I'm sure the rest of your arguments will fall into place. ;)

Gldm wrote:

Thus you can't do things like sample a video at 60fps and output 30fps progressive


...because this is impossible, not because of some AVI limitation. Video hardware captures video, and video is 30i, not 30p. You are confusing computer video, broadcast video, and the upcoming HDTV 480p -- none of which have anything to do with each other.

Gldm wrote:

so you never get a 60fps input. I'm wondering if there's a way around this, it's theoretically possible to write custom tools and codecs to support it, but it's too much work. You'd need to write an mpeg2 decoder to advertise 60fps, then output the half res images to the codec, then on playback have the player reconstruct the interlacing at 30hz... nah.


MPEG-2 decoder specs go as high as 59.94Hz and 60Hz, and you can already create MPEG-2 files that are true 720x480@60p. But although they are in the spec and are playable on computers, there are no DVD players that support them. When HDTV and 480p become mainstream in 2006, we will start seeing a new breed of DVD technology that supports 480p (along with 720p and 1080i, woohoo!).

Quote:

And yes I'd always rather see progressive frames than see comb artifacts on motion.


You must not be watching video on an actual TV, then -- because you don't see combing artifacts on an interlaced device. Watching interlaced video on computers sucks; they don't properly interpret fields. There is a calibration test in the Video Setup portion of the disc with a screen that illustrates this problem: if you have a bad software player -- and most are -- there is no motion in a test pattern that should have motion. People whose idea of digital video is DivX and computers have been misled about what video really is and how to work with it, and that's the #1 reason I hate DivX -- for screwing with people's expectations. Kind of like how I hate Microsoft for the same reason. :)

Quote:

Mmm I'd have to research on capturing 480p, but I'm skeptical there's NO conventional low cost hardware that does it. What about VGA->S-video then capture on any modern graphics card with capture? There'd be some conversion loss but I've captured 720x480 images @30fps without framedrops on a radeon and a gf4. 60fps 480p would likely be very difficult or expensive however.


S-video is interlaced. You cannot capture 60p from a 30i signal unless you are trying to capture half the vertical resolution.

Quote:

As for CPU use, a 720x480 divx5 scaled up and interpolated takes about 30% on my athlon XP 1800+. 60fps shouldn't take much more than double that number. It would be less feasible than mpeg2 in terms of cpu power but not unfeasible.


Following your numbers, a 720x480@60p video would require an Athlon 3600+, which obviously won't exist for at least a year.

Take a look at www.100fps.com if you'd like some examples of interlaced vs. progressive video, with pictures. If anything is unclear, let me know and I can try to help. I'm not trying to be difficult, but I do want people to have the right information.

Tomcat
Guest
Posted: Sat Jan 11, 2003 7:13 am

I think it makes sense to educate people more about what they are downloading there.

I just searched two "special" ;) forums for "mindcandy" and there were no results. Whenever I do find posts about it, you can be sure I'll tell people whom they are harming.
And, man, EUR 15 (Germany here) is not a lot of money... I would've expected to pay more, like 25 or 30, and I would even have paid that (probably).

As for myself -- I just ordered 2 MindCandy DVDs. :)

I want Volume 2!! :D

hhfql
Guest
Posted: Sat Jan 11, 2003 2:06 pm

Umm no, if you take the 720x240 fields out of an interlaced broadcast you get an image with a horribly wrong aspect ratio. Even disregarding that, a 720x240 scan is not a discrete 720x480 image. If you want to argue that subsets of an image are discrete images, then you can slice an image up any way you want and have as many discrete images as there are pixels. Yes, there are motion differences between fields, so maybe that's what you're counting, but the image you get is not designed to be viewed without the other half of the scan lines; it's not a full image. It's a half-res image. And when you see it in motion, you'll get lovely comb artifacts where the image has moved between fields, which on a low res device like a TV just look blurry.

And no, I don't have space for a TV; I just watch everything on my 21" monitor. Video cards with TV input are cheaper than TVs these days, and they take up far less room. This is one of the reasons interlacing pisses me off. Some of the modern deinterlacers, like dTV and ATI's built-in hardware deinterlacing, are pretty good though.

If you were following my numbers, I said 30% CPU for 720x480@30; that implies 60% CPU for 720x480@60. It wouldn't require a faster CPU at all.

Trixter
Guest
Posted: Sat Jan 11, 2003 5:22 pm

Gldm wrote:

Even disregarding that, a 720x240 scan is not a discrete 720x480 image.


I never said it was -- because it isn't. At no point in the history of television has a single image been 480 lines tall. TV signals have always been interlaced. The concept of "frames" was introduced with video recording devices in the 1950s, but the fundamental principle has been the same since television started: the sampling rate of the image is 60Hz (actually 59.94 since color was introduced).

You seem to be under the impression that the odd scanlines and the even scanlines of a frame are both part of the same image. They're not. They're sampled at different points in time. Each 720x240 field in a frame, odd or even, is a completely different image from the other one. Until you understand this, we can't continue our conversation. I urge you to perform the test I suggested earlier with the code sample, so that you can see this phenomenon for yourself.
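
If single-stepping isn't convincing, here is a variation on the earlier script (still just a sketch, on the same myfile.avi) that puts each frame's two fields next to each other; on anything with motion, they are visibly two different pictures:

Code:

src = AVISource("myfile.avi")   # the same interlaced capture as before
f   = src.SeparateFields()      # 720x240 fields, roughly 59.94 per second
one = f.SelectEven()            # one field from each original frame
two = f.SelectOdd()             # the other field from the same frame
StackHorizontal(one, two)       # frame N shows both of original frame N's fields side by side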

Quote:

And when you see it in motion, you'll get lovely comb artifacts where the image has moved between fields, which on a low res device like a TV just look blurry.


The latter part of that sentence doesn't make any sense, because the video itself *is* a TV signal. Your misunderstanding of this issue is 100% tied to your computer video viewing experiences. If you connect a VGA card with TV output to a TV and play an interlaced .AVI, you are NOT seeing the proper output of that video unless you use some special player that comes with the video card. Perhaps this is why you think interlaced video output to a TV looks "blurry" -- the captured signal is not being sent back out exactly as it was captured.

If you lived in Illinois I would invite you to my house so that I could illustrate all this for you personally. :) What you really need in order to understand this is a prosumer or professional video acquisition card, so that signal in = signal out.

Yes, I was wrong with the numbers in my previous post. :) But even so, 60% CPU time on an Athlon 1800+ is not reasonable (I only have a 1GHz Athlon myself). <soapbox> I dislike DivX because it was misapplied -- it was originally designed as a low-bitrate codec, and people are misusing it as some sort of replacement for MPEG-2, which it was not designed for and does not handle well without crazy amounts of CPU. </soapbox>

Would you like me to capture a test clip of something so you can see that each field is a completely separate image from the other?

hhfql
Guest
Posted: Sat Jan 11, 2003 6:41 pm

The problem is you're trying to convince me that fields are sampled at different points in time. I already know this. I already know there are motion differences between the fields. I'm arguing that a field is NOT an image. It's half an image. It's half the resolution of the display; it's not a full image. It's not intended to be displayed by itself on the screen without the results of the previous or next field being shown as the rest of the screen.

I don't care what the signal is intended as, a hard edge that looks like this:

XXXXX
XXX
XXXXX
XXX
XXXXX

IS CRAP. At a high enough sample rate the last 2 pixels will just be blurred because either there isn't enough resolution (many older tvs), or the phosphor doesn't recover fast enough (some tvs). On a highend TV, like many of the HDTVs I'm seeing in stores now, or a monitor, it looks like a jagged comb edge instead of a solid edge. It's irritating. The constant flicker from the interlaced scan is irritating too. Most people don't notice it until they see it next to something that's progressive scan, either a monitor or a highend HDTV.

I hate more than DivX -- I hate the entire MPEG family. The algorithm is obsolete. Efforts to improve it are resulting in diminishing returns. A lot of the choices made in the design stage were due to the small memory of computers at the time it was being designed. There are better ways of doing things these days, but everyone keeps insisting on using it. The reason people are using DivX over MPEG-2 is that MPEG-2 doesn't allow for a very scalable bitrate range. It's impractical to store a movie in MPEG-2 on CDs or transfer it even over broadband. DivX is filling that niche for now until something better appears.

Trixter
Guest
Posted: Sat Jan 11, 2003 10:11 pm

Gldm wrote:

I don't care what the signal is intended as, a hard edge that looks like this:

XXXXX
XXX
XXXXX
XXX
XXXXX

IS CRAP.


Well, that's not really fair -- you're looking at a frame of two interlaced fields, two moments in time. That is NOT a hard edge. Pick the odd or even field instead -- there's your hard edge. Yes, it sucks that each 720x240 field has a slightly different vertical phase, which means you can't just view each field stretched out, because the video will "bounce" up and down 30 times a second. I agree this blows. TVs aren't high-resolution devices, and neither is the signal: you get either a 240-line image 60 times a second with slightly different characteristics every other frame, or a 480-line image 30 times a second.

Quote:

At a high enough sample rate the last 2 pixels will just be blurred because either there isn't enough resolution (many older tvs), or the phosphor doesn't recover fast enough (some tvs).


I don't follow you, since the actual analog signal has an infinite (I'm not kidding) horizontal resolution. It is only *sampled* 720 times across during typical desktop video capture. If I'm misunderstanding you, please correct me.

Quote:

On a highend TV, like many of the HDTVs I'm seeing in stores now, or a monitor, it looks like a jagged comb edge instead of a solid edge.


That is because an interlaced signal is being displayed on a progressive device with absolutely no processing whatsoever. Progressive-scan televisions rarely go into a home theater setup without a line-doubler or similar device, so whoever set up the displays you've looked at missed a step.
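
For what it's worth, the missing step is easy to approximate in software. A rough AviSynth sketch -- "storefloor.avi" is a hypothetical interlaced capture, and Bob() stands in for a simple line doubler:

Code:

src = AVISource("storefloor.avi")   # hypothetical 30i capture
raw = src                           # shown as-is: comb artifacts on anything that moves
fix = src.Bob().SelectEven()        # bob-deinterlaced, thinned back to the source frame rate
StackVertical(raw, fix)             # compare the two halves on a moving scene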

Quote:

It's irritating. The constant flicker from the interlaced scan is irritating too. Most people don't notice it until they see it next to something that's progressive scan, either a monitor or a highend HDTV.


Most people don't notice it because most of the time it is not noticeable. :) TV phosphors are slow, you know that...

Quote:

I hate more than DivX -- I hate the entire MPEG family. The algorithm is obsolete. Efforts to improve it are resulting in diminishing returns. A lot of the choices made in the design stage were due to the small memory of computers at the time it was being designed. There are better ways of doing things these days, but everyone keeps insisting on using it. The reason people are using DivX over MPEG-2 is that MPEG-2 doesn't allow for a very scalable bitrate range. It's impractical to store a movie in MPEG-2 on CDs or transfer it even over broadband. DivX is filling that niche for now until something better appears.


I have some problems with what you wrote in the above paragraph:

- "The algorithm is obsolete." -- Considering that it is the delivery mechanism for HDTV, I wouldn't call that obsolete.

- "Alot of the choices made in the design stage were due to the small memory of computers at the time it was being designed." Computer limitations in 1991 had nothing to do with it -- MPEG-1 wasn't designed for computers at all, but rather embedded systems, which can be designed for anything. You are also inferring that better video algoritms exist with a tradeoff in memory, but which are those? Wavelets are the only technology I've found that beats MPEG-1/2 in terms of storage efficiency, but they don't require a lot of memory for decompression... If you know of an algorithm better than wavelets and is NOT derived from MPEG-1 and/or DCTs, I'd love to know about it. But don't mention H.26x because they're all derived from the same basic concept, as is MPEG-4's Advanced Simple Profile.

- "mpeg2 doesn't allow for a very scalable bitrate range" Then how did I pack over 2 hours of 720x480 @ 30 into 3.5 gigabytes, with bitrates as low as 128kbit and as high as 9200kbit, in a single file? Smile Maybe you instead meant to write that MPEG-2 wasn't meant for low-bitrate applications, and guess what: You're right, it wasn't. But don't fault MPEG-2 for that since it was never designed for low-bitrate applications.

The topic we were originally debating was your statement: "No matter how you slice it, it's really only a 30 frame per second update. Unless you're losing half the frame data in interlacing, in which case it's still the equivalent of 30 frames per second." It is the second statement that I still disagree with, because I feel you are implying that DVD is only 30 images per second -- and when you view almost anything on the modern side of MindCandy, it most certainly is 60 images per second. Not 480-line images, but regular old NTSC 240-line images, as it has been since the dawn of NTSC.

Early on in the project Dan asked me if it was possible to master 720x480@60p. I told him that it was possible, but 1. although it is in the spec for MPEG-2, no DVD player would play it, and 2. I cannot capture 60p from video, only faked 30p (deinterlaced) or 30i. Maybe in five years when we all have 480p equipment it will be in the hands of the consumer. It's simply impossible to capture 480p @ 60Hz right now.
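
(For the curious, the "faked 30p" route is easy to sketch in AviSynth. This is only an illustration, using the same myfile.avi as before, and it throws away half of the temporal information:)

Code:

AVISource("myfile.avi")    # the 30i capture
SeparateFields()           # 720x240 fields, roughly 59.94 per second
SelectEven()               # keep only one field per original frame, back to roughly 29.97 per second
BilinearResize(720, 480)   # stretch back to full height: a crude "faked 30p"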

MadenMann
Guest
Posted: Sun Aug 08, 2004 9:20 am

OK, I skipped the last posts because that discussion wasn't interesting to me; I just want to add my opinions:

Copy protection
I'm happy there isn't any, and I hope there won't be any in future versions either. I don't like the region code, because only the global players (no, not madwizards) profit from it. It's the same with copy protection. We pay for the DVD and the stuff on it, physical and non-physical. When the DVD becomes useless we have to buy a new one and pay for the non-physical stuff again. Being able to make our own copy would be the better solution for the user.
I don't know how it is in the US, but at the moment here in Germany most music CDs (at least from the majors) are copy-protected in a way that won't let you play them on old players, car CD players, or computers. Forbidding the use of tools that can copy copy-protected CDs/DVDs is stupid anyway, because a copy-protected CD/DVD has no copy protection if you can copy it somehow. Copy protections "protect" the users from using what they've paid for.
=> Please don't use any copy protection. Make quality DVDs for a fair price and people will buy them.

DivX
I like it; it sometimes gives me the possibility to watch demos which I cannot run on my computers. I thank robotriot for his amidemos (old site, new site) because they are better than nothing, but then again, 15 MB for a demo is often too little. (OK, some years ago there wasn't the hardware power for more, and there were fewer broadband connections.)
I once saw a DivX version of 2nd Reality (I don't know whether it came from the demo itself or from the MindCandy DVD), but the quality was so low (some effects were totally killed) that I stopped watching after a few minutes and deleted it immediately.
Pros and cons...

Double sided vs. double layer
I know double layer was too expensive back then, but I'm very happy you've decided to make Version 2 a double-layer DVD. It's not the missing cover or the "please turn disk"; I just don't like a disk that can get scratched very easily. You've written somewhere that double layer holds a few MB less (about one demo); I think it is still better than double-sided.

Prices
15 $/Euro is OK; don't believe Tomcat. ;) I don't think it is too much even if you don't have much money, and for everyone else it is a fair price. I also think that if someone gets a DivX copy or similar and really likes it, (s)he will buy the DVD. People with such a copy who won't buy the DVD wouldn't have been happy buying the DVD without this "preview" either. So, keep doing what you've been doing and don't give a shit about copiers. It's the same with Tomcat's book: it only makes sense as a book, not as a PDF...

Btw, I own 2 MindCandys: one original (mint) and a white-boxed one which I use. Both bought by myself, one for use and one as a backup (because of the double-sided thingy). ;)