LAS VEGAS (KLAS) — Dr. Travis Taylor was part of the UAP Task Force team that prepared a report for Congress about the UFO mystery. The Task Force had access to sensor data, unredacted files, witness accounts, and classified information that may never be seen by the public. Taylor explains how the team was able to rule out certain prosaic explanations for unknown objects encountered by the US military.
George Knapp: Can you talk about your work as chief scientist for the task force? What did you do? Did you look at material that came in? Videos to analyze? Things of that sort.
Dr. Travis Taylor: Yeah. So there were a lot of different datasets that came in that we looked at and tried to figure out what they were. Some turned out to be weather balloons, for example. In fact, there were several. But there were many where there wasn't enough data, and others where we had a lot of sensor data and still couldn't determine what they were. I mean, if it's our near peers doing it, that's scary. But at the same time, we also never found any evidence that it was our near peers doing it, right?
Knapp: You were instrumental in writing the report that was demanded by Congress that was made public … 144 cases, right?
Taylor: I was one of the people who wrote words that are in that … it was a team, and we worked diligently on it for a long time. We started out by putting in everything we could think of, and the kitchen sink. Then we realized this was going to a public audience and to Congress, so we had to write it in a way that they would understand and get the point. And in the nine pages that came out, there's a golden nugget, if people would just pay attention. There were 144 cases we studied in a period of about three years. These are only cases that were from credible military sources, right, not just from MUFON or the like, because these things might be happening all over the world, but we chose sources that we knew had a chain of custody of the data. And out of those 144 cases, for 143 of them we still couldn't figure out what they were, where they came from, and what their intent was.
Knapp: So by the time it gets to 144, you've already weeded out things that are easily explainable. Those are legitimate mysteries. You see social media, debunkers, armchair experts who take some of the videos that were studied by the task force, videos that we helped put out, and they explain them away in a variety of different ways. It makes the military look like idiots … as if they'd have to be idiots not to understand what these guys think they know.
Taylor: Yeah, and that's okay. It's part of the peer review process in science and engineering. But here's the thing: a skeptic and a debunker start from the beginning already knowing what the result is going to be, and they steer the analysis in the direction that leads them there. I'm not here to believe or disbelieve; I'm here to find out what's going on and do analysis on the data we have. And in many cases, the data we had was more than what the general public has and what was released. So when we say that we had a thing that was captured by multiple sensors that told us multiple things, and we also had eyewitness accounts, audio information, and so on, you put all that together and it's a much bigger picture than just saying, 'Oh, we're not going to listen to or look at any of that, we're only going to look at what's on this few seconds of video, and we can tell you for sure from those few seconds what it is.' I don't think there's a person on the planet who can really do that and do it honestly.
Knapp: You’re an optical physicist among your many degrees, right? So you could look at an image and tell if it’s a flare or a bird or something like that?
Taylor: Well, I would like to hope so. I can certainly look at it with the various analysis tools, and if it is something as mundane as that, it's usually easy to determine. There are some things like camera effects that you may see. Camera artifacts will sometimes confuse someone and make them think they've captured something. And sometimes people will point to a camera artifact and say, 'Oh, that's just a flare in the camera,' or, 'Oh, that's just a bokeh effect,' or 'Oh, that's this or that.' But without really doing the analysis, you can't just say, 'Oh, it's birds. Don't worry about looking at it.'
Knapp: People who are describing it as bokeh don't have the other data sources that you have.
Taylor: That's correct. And that's one of the things people have to accept: there is sometimes more data. And you say, 'Well, then show it to us.' We never had the intent to classify things to keep people from knowing about them. Things are classified because of the sensor or the platform the data was taken from. We don't want to give away how we do things to our near peers. If we had a magic widget that, say, could look into the Kremlin, we certainly wouldn't want them to know we had that magic widget. I'm not saying we do; I'm just giving that as a hypothetical example. So if we've got platforms that captured other data, and it's data from a sensor that nobody knows exists, we're not going to release that data, because then you'd know the sensor exists.