Commercial Skip Upload Failed

Yes, thanks for the update. We kind of figured it was a capacity issue. Hopefully you guys have more than one server. The ACS is the #1 feature my wife likes; without it, well, it's just another DVR from her point of view, and the lack of thumbnails on live TV makes it inferior to what she's used to. So hopefully the ACS issues will be resolved soon and she'll be happy. I'm trying to win a battle over here.

We do… Just didn’t feel the need to get into a long-winded server architecture discussion :wink:

This sounds like a pretty good strategy. But I am certain this is not what is actually happening. If it were, it would take two days before I/we saw the “upload failed” message, but in reality I am seeing it within a few hours of the show being recorded (the past few days in particular).

I know the Tablo didn’t reboot, because I was watching other previously recorded shows on it at the time. My Tablo is hard-wired to my gigabit switch, so I know there weren’t any Wi-Fi issues. My Comcast service should have more than adequate bandwidth to support the upload, and while I may have been web-surfing on a tablet while watching those previously recorded shows, I didn’t experience anything that would lead me to believe my network had any issues. My Tablo and internal drive are both new (I’m still in a 30-day evaluation period), so while there could be a problem with the drive, the probability of that seems quite low to me and should be discounted unless further evidence presents itself. Plus, with so many other people seeing the same problem, it seems to point to a software issue with the Tablo: either the firmware in the device, or on the server end.

So I guess I have to ask Tablo to revisit their code, to see what bug is causing this quick failure instead of the retries over the course of several nightly maintenance windows, as suggested.

I agree. I know the unit isn’t rebooting or losing power, etc., and yet shows that should have been retried and processed, alongside others that did work, weren’t.

I really appreciate you being open and upfront about it. Other companies would try to deflect. I can live with the feature not being perfect for a while.

Hey folks -

Just a follow-up to our earlier post regarding the issues some folks are experiencing with Automatic Commercial Skip.

Over the past few weeks we’ve seen a 3x increase in the volume of recordings being sent to the server for processing.

While we did anticipate and plan for a bump from the new Fall TV season, this happened to coincide with a huge number of brand new Tablo users coming online.

While we work to adjust our back-end systems to handle this demand spike, we will be temporarily placing NFL Football games on the ‘filtered’ list.

By not uploading files from these popular (and lengthy) football recordings, we hope to free up enough horsepower to successfully process the remaining recordings for everyone.

Once we’ve made the adjustments to account for the increased traffic, we’ll remove NFL football from the filtered list.

Unfortunately we don’t have a specific ETA to share for that, but we hope it will only be a week or two.

Stay tuned for further updates and please keep us posted on your ongoing experience with this feature in this thread. Your feedback during this open beta is invaluable!


I assume you guys are de-duplicating your workload: taking a few samples of a show from a given market, processing them, and then skipping processing for everyone else who clearly has the exact same content, short a couple of pixels. In reality they wouldn’t even need to upload.

With 5 different recording qualities, I wonder how that works. If I’m recording at 3 Mbps and the only previous upload of the file is 10 Mbps, is there a one-to-one mapping? Or what happens if, in the middle of a show, an emergency news report comes on indicating that my neighborhood is on fire?

Since I’m in PDT, my one-hour recording that ended at 11 PM (PDT) completed CS at 1:05 AM.

But the two that completed at 12:30 AM and 1:30 AM had pretty speedy CS processing. Too bad the server doesn’t have some unsophisticated response back to the Tablo software indicating that AWS is too busy and that processing needs to be delayed.
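That back-pressure idea could be as simple as the server answering “busy, try later” and the box waiting before re-uploading. A rough Python sketch of what I mean (all names and the response shape are hypothetical, not Tablo’s actual API):

```python
import random
import time


def upload_with_backoff(upload_fn, max_attempts=5, base_delay=60):
    """Retry an upload, honoring a server-side 'too busy' signal.

    upload_fn() should return ("ok", None) on success, or
    ("busy", retry_after_seconds) when the server is overloaded.
    """
    for attempt in range(max_attempts):
        status, retry_after = upload_fn()
        if status == "ok":
            return True
        # Server said it's busy: wait the hinted time, or back off
        # exponentially, with a little jitter so every Tablo doesn't
        # retry at the exact same moment.
        delay = retry_after if retry_after else base_delay * (2 ** attempt)
        time.sleep(delay * (1 + random.uniform(0, 0.1)))
    return False  # give up for now; try again at the next nightly maintenance
```

The point is the box never shows “upload failed” for a mere capacity blip; it just quietly tries again later.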

In the same market, the same report would come on for everyone, i.e., same TV station. The quality doesn’t matter, since you just flag the start and end times the commercials occupy. Once you have enough samples of a given show from a given market, you tell the other Tablos with that same show on that same TV station the results. No need to upload. This would produce far faster results with less work. Hopefully they are already doing this. Football games would work the same way, on the same TV station.
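A toy sketch of that caching idea in Python (the key fields and the sample threshold are my guesses, not anything Tablo has described):

```python
# Cache commercial-skip markers per airing, so identical recordings
# in the same market don't each need full server-side processing.
skip_cache = {}  # (station, show, air_time) -> list of processed results

MIN_SAMPLES = 3  # how many processed copies we want before reusing the result


def get_or_process(station, show, air_time, process_fn):
    """Return cached commercial markers for this airing if enough copies
    have already been processed; otherwise process this copy and record it.

    process_fn() stands in for the expensive server-side analysis and
    returns a list of (start_seconds, end_seconds) commercial intervals.
    """
    key = (station, show, air_time)
    samples = skip_cache.setdefault(key, [])
    if len(samples) >= MIN_SAMPLES:
        # Enough copies already processed: reuse the result, no upload needed.
        return samples[0]
    markers = process_fn()
    samples.append(markers)
    return markers
```

Since the markers are just timestamps against the broadcast, the same cached result serves every recording quality of that airing.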

Doesn’t Tablo transcode raw OTA MPEG into H.264 HLS files controlled by an .m3u8 playlist?

Currently, doesn’t CS work by processing the thumbnails and returning an updated playlist for that specific HLS file? So the playlists are the same between different recording qualities, and they are also the same if you start the recording early and end it late.
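If that’s right, then “applying” commercial skip is mostly playlist editing. A toy Python sketch of the idea (segment lengths and tags simplified; this is not Tablo’s actual playlist format):

```python
def strip_commercials(segments, commercials):
    """Drop HLS segments whose time range falls inside a commercial break.

    segments:    list of (filename, duration_seconds) in playback order
    commercials: list of (start_seconds, end_seconds) intervals to skip
    Returns the lines of a simplified .m3u8 playlist.
    """
    lines = ["#EXTM3U", "#EXT-X-TARGETDURATION:10"]
    t = 0.0
    for name, dur in segments:
        mid = t + dur / 2  # classify each segment by its midpoint
        if not any(start <= mid < end for start, end in commercials):
            lines.append(f"#EXTINF:{dur:.1f},")
            lines.append(name)
        t += dur
    lines.append("#EXT-X-ENDLIST")
    return lines
```

Because the edit is driven purely by timestamps, the same commercial intervals could rewrite the playlist for a 3 Mbps recording or a 10 Mbps one.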

OK, and how much would de-duplicating the workload help, even if that’s all true? I suspect greatly.

You can probably find a number of posts about copy forward when the open beta was announced at the end of April.

Maybe it will come true. But if copy forward has failures, what does a user do to obtain proper CS processing?

I’m not talking about copy forward. I’m talking about reducing server load. They take XX samples, and if they agree, they send that result to the remaining requests. Looks like a duck, looks like a duck, looks like a duck; guess what, it’s a duck.

That’s what copy forward is. The playlist from previous CS processing would be returned for future requests.

But I guess the user doesn’t need a retry when it’s bad.

Sounds like medical billing or something. I like deduplication of workloads.

If they process 10 samples or more out of, say, 100 requests for that TV station, what are the odds it’s going to produce inaccurate results? If 9 agree and 1 doesn’t, toss the 1 and wait for another sample; if it agrees, then call it a good result. Adjust the sample threshold as needed to ensure the required precision.
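The toss-the-outlier scheme could look roughly like this in Python (the thresholds and the agreement test are made up for illustration):

```python
from collections import Counter


def consensus(samples, min_votes=9, tolerance=1.0):
    """Return the agreed commercial markers once enough samples match.

    samples:   list of marker lists, e.g. [[(120.0, 210.0)], ...]
    tolerance: seconds of slack when comparing marker times, absorbing
               tiny timing differences between individual recordings.
    Returns the winning marker list, or None if there's no consensus yet.
    """
    def normalize(markers):
        # Snap times to the tolerance grid so near-identical results
        # compare equal and outliers stand apart.
        return tuple((round(s / tolerance), round(e / tolerance))
                     for s, e in markers)

    tally = Counter(normalize(m) for m in samples)
    if not tally:
        return None
    winner, votes = tally.most_common(1)[0]
    if votes >= min_votes:
        # Scale back to seconds for the caller.
        return [(s * tolerance, e * tolerance) for s, e in winner]
    return None  # outliers get outvoted; keep waiting for more samples
```

With a 9-of-10 threshold, a single disagreeing sample is simply outvoted, and the threshold can be raised or lowered to trade latency for precision.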

There might actually be legal reasons why “sharing” across users cannot be used to avoid duplication of effort.

Doesn’t Tablo’s CS processing use ML combined with DSP?

Not knowing which ML algorithms Tablo is using: ML usually involves training data, and the accuracy of the results is often dependent on the size of the data set (recording) processed. So reducing the data set doesn’t necessarily increase accuracy.

Or if you are driving a Tesla on ML autopilot, does it think a fireplug is a small child about to enter the street and come to a stop?

Of course, you might live in a neighborhood where Tesla’s ML recognizes that there is a child at the curb, and the child is the neighborhood terror who keyed the door, and the Tesla drives onto the curb and runs them over.

My point was: if you determine, within reason, that a commercial is at a given location for a given station at a given time, chances are everyone else watching that exact same show on that same channel is going to have the same results. Why process 2,000 copies of the same thing only to get 99.9% the same answer as processing 10 copies?

And don’t get started on the speed of the radio waves causing microsecond differences. Oh, and if you do, you’d better factor in the velocity factor of the coax and the cable length.

What percentage of accuracy do you want for CS processing across all episodes to be: 75%, 80%, 85%, 90%, ??

ML usually involves training data and the accuracy of the results is often dependent on the size of the data set (recording) processed.

Face palm, I give up. It appears people just like to argue on any topic just for the sake of it…shocking.