Regarding querying data with Hive, here is the reason: while data is being written to HDFS for your Hive table, a temporary file is kept open for appending records to it. If you take a look at your Hive table's path, `/user/hive/warehouse/<database>/<table>`, you'll see `_tmp*` files. These temporary files are kept open for appending records based on the settings in Hadoop FS destination > Output Files (such as Idle Timeout, Max Records in File, and Max File Size).
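To verify this yourself, list the table directory (e.g. with `hdfs dfs -ls`) and look for the `_tmp*` prefix. Below is a minimal Java sketch that does the same through the Hadoop FileSystem API; the warehouse path is a placeholder, so substitute your table's actual location:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListWarehouseFiles {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from core-site.xml/hdfs-site.xml on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Placeholder path: point this at your own database/table directory.
        Path tableDir = new Path("/user/hive/warehouse/mydb.db/mytable");
        for (FileStatus status : fs.listStatus(tableDir)) {
            String name = status.getPath().getName();
            // Files still being written by the pipeline carry the _tmp prefix.
            String state = name.startsWith("_tmp") ? "open (temporary)" : "closed";
            System.out.printf("%-16s %s%n", state, name);
        }
    }
}
```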

With the default settings, a file is kept open indefinitely, unless it receives no records for an hour, in which case it is closed and renamed to a valid name. Make sure you adjust these settings based on how small or large you want the files created in HDFS to be; a typical recommendation is at least one HDFS block size. Don't leave the limit at 0, which means no maximum.

If you set explicit limits, the file will be kept open for appending records until any one of the conditions is met. The first condition met triggers the file to be closed and renamed from `_tmp*` to `sdc_*`.
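To make the "first condition met wins" behavior concrete, here is a hypothetical sketch of such a roll policy. The class and method names are illustrative only, not SDC's actual implementation:

```java
public class RollPolicy {
    private final long maxFileSizeBytes;  // 0 = no size limit
    private final long maxRecords;        // 0 = no record limit
    private final long idleTimeoutMillis; // 0 = never close on idle

    public RollPolicy(long maxFileSizeBytes, long maxRecords, long idleTimeoutMillis) {
        this.maxFileSizeBytes = maxFileSizeBytes;
        this.maxRecords = maxRecords;
        this.idleTimeoutMillis = idleTimeoutMillis;
    }

    // True as soon as ANY configured limit is hit; the file would then be
    // closed and renamed from its _tmp* name to sdc_*.
    public boolean shouldClose(long sizeBytes, long records, long idleMillis) {
        return (maxFileSizeBytes > 0 && sizeBytes >= maxFileSizeBytes)
            || (maxRecords > 0 && records >= maxRecords)
            || (idleTimeoutMillis > 0 && idleMillis >= idleTimeoutMillis);
    }

    public static void main(String[] args) {
        // One HDFS block (128 MB), one million records, or one minute idle.
        RollPolicy policy = new RollPolicy(128L * 1024 * 1024, 1_000_000, 60_000);
        // Idle timeout is the first limit hit here, so the file closes.
        System.out.println(policy.shouldClose(10_000_000, 5_000, 61_000)); // true
    }
}
```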
When you query with Hive, each query runs as a MapReduce job, and MapReduce by default ignores input files whose names are prefixed with an underscore (or a dot). Hence, in your case, the query doesn't return anything. When you stop the pipeline, the files are closed and renamed, and your query returns data. Adjust the Hadoop FS destination settings based on how often you want to query the data.
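The underscore rule is the default hidden-file filter in Hadoop's FileInputFormat; the snippet below reproduces it so you can see why the open `_tmp*` files are invisible to a query while the renamed `sdc_*` files are not:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class HiddenFileFilterDemo {
    // Same rule FileInputFormat applies to job inputs: names starting
    // with "_" or "." are treated as hidden and skipped.
    static final PathFilter HIDDEN_FILE_FILTER = path -> {
        String name = path.getName();
        return !name.startsWith("_") && !name.startsWith(".");
    };

    public static void main(String[] args) {
        // Still open: filtered out, so the query sees no data from it.
        System.out.println(HIDDEN_FILE_FILTER.accept(
                new Path("/user/hive/warehouse/mydb.db/mytable/_tmp_sdc-0001"))); // false
        // Closed and renamed: now part of the query's input.
        System.out.println(HIDDEN_FILE_FILTER.accept(
                new Path("/user/hive/warehouse/mydb.db/mytable/sdc-0001")));      // true
    }
}
```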
Note: Hive is not meant for real-time data access. If you find the need to query data in near real time, use HBase, Kudu, or another storage option that supports it.